Test Report: Docker_Linux_crio_arm64 17866

                    
8c6a2e99755a9a0a7d8f4ed404c065becb2fd234:2024-01-08:32612

Failed tests (6/316)

Order  Failed test                                            Duration (s)
35     TestAddons/parallel/Ingress                            167.35
167    TestIngressAddonLegacy/serial/ValidateIngressAddons    179.43
217    TestMultiNode/serial/PingHostFrom2Pods                  4.48
239    TestRunningBinaryUpgrade                                75.27
242    TestMissingContainerUpgrade                            174.38
254    TestStoppedBinaryUpgrade/Upgrade                        82.14
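
Each failure below can be re-run in isolation with go test. A minimal sketch, assuming a checkout of the minikube repository and this job's driver/runtime combination; the test/integration path and the --minikube-start-args flag come from minikube's integration-test harness and may need adjusting for other setups:

# Hedged sketch: re-run one failed test from the minikube repo root.
go test -v -timeout 60m ./test/integration \
  -run "TestAddons/parallel/Ingress" \
  -args --minikube-start-args="--driver=docker --container-runtime=crio"
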
TestAddons/parallel/Ingress (167.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-260832 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-260832 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-260832 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e0763903-17e6-4ecd-87ad-59da751b44f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e0763903-17e6-4ecd-87ad-59da751b44f4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004200166s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-260832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.485297954s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-260832 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.057080781s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-260832 addons disable ingress --alsologtostderr -v=1: (7.779034087s)
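
The two failing steps above are the in-node curl (ssh exit status 28 here is curl's operation-timeout code) and the nslookup against the node IP 192.168.49.2 served by the ingress-dns addon. A minimal sketch of follow-up checks against this profile, assuming the default namespaces and object names the ingress and ingress-dns addons create:

# Hedged sketch: object names are addon defaults and may differ per release.
kubectl --context addons-260832 -n ingress-nginx get pods -o wide
kubectl --context addons-260832 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
kubectl --context addons-260832 -n kube-system get pods | grep -i ingress-dns
out/minikube-linux-arm64 -p addons-260832 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
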
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-260832
helpers_test.go:235: (dbg) docker inspect addons-260832:

-- stdout --
	[
	    {
	        "Id": "5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c",
	        "Created": "2024-01-08T22:31:16.277878349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1153290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T22:31:16.594056269Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3167e60a71dbae425a4b9caa3fc8f52cf3c3b5035be6746ce0af2b692a3018d8",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c/5e3dc1a846a66544d6495dda85d707934e4116da575659d1d11fc8d6e91c9e7c-json.log",
	        "Name": "/addons-260832",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-260832:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-260832",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c9a2dbfcf61bd4bb58ce77a2a3e50983e4047ca18d1982557f00951c33ecb3eb-init/diff:/var/lib/docker/overlay2/38e0010c12bf0b8a699570be0a9e49c2514b24d0012b6438a157027e46de7e51/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9a2dbfcf61bd4bb58ce77a2a3e50983e4047ca18d1982557f00951c33ecb3eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9a2dbfcf61bd4bb58ce77a2a3e50983e4047ca18d1982557f00951c33ecb3eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9a2dbfcf61bd4bb58ce77a2a3e50983e4047ca18d1982557f00951c33ecb3eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-260832",
	                "Source": "/var/lib/docker/volumes/addons-260832/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-260832",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-260832",
	                "name.minikube.sigs.k8s.io": "addons-260832",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9e3ad1fa96c944eff8f43a49509b54004f3a55813481b148566b21a3aaf0405",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34033"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34032"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34029"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34031"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34030"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b9e3ad1fa96c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-260832": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3dc1a846a6",
	                        "addons-260832"
	                    ],
	                    "NetworkID": "80050b30503dfe4924de83c49a6398c3b5f60b0e4127eed569a0ba644f350532",
	                    "EndpointID": "f269eb74c6844b8e0ba878c0e5a22ada2c189df6c1a424a642fb7d454322f43e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-260832 -n addons-260832
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-260832 logs -n 25: (1.619598674s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:30 UTC |
	| delete  | -p download-only-248557                                                                     | download-only-248557   | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:30 UTC |
	| delete  | -p download-only-248557                                                                     | download-only-248557   | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-760773 | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | download-docker-760773                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-760773                                                                   | download-docker-760773 | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-765092   | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | binary-mirror-765092                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33297                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-765092                                                                     | binary-mirror-765092   | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | addons-260832                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |                     |
	|         | addons-260832                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-260832 --wait=true                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC | 08 Jan 24 22:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:33 UTC | 08 Jan 24 22:33 UTC |
	|         | -p addons-260832                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-260832 ip                                                                            | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:33 UTC | 08 Jan 24 22:33 UTC |
	| addons  | addons-260832 addons disable                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:33 UTC | 08 Jan 24 22:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:34 UTC | 08 Jan 24 22:34 UTC |
	|         | -p addons-260832                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:34 UTC | 08 Jan 24 22:34 UTC |
	|         | addons-260832                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-260832 ssh cat                                                                       | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:34 UTC | 08 Jan 24 22:34 UTC |
	|         | /opt/local-path-provisioner/pvc-c2c49568-2563-4197-8146-3703714a5804_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-260832 addons disable                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:34 UTC | 08 Jan 24 22:34 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-260832 addons                                                                        | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:34 UTC | 08 Jan 24 22:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	|         | addons-260832                                                                               |                        |         |         |                     |                     |
	| addons  | addons-260832 addons                                                                        | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-260832 addons                                                                        | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC | 08 Jan 24 22:35 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-260832 ssh curl -s                                                                   | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-260832 ip                                                                            | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:37 UTC | 08 Jan 24 22:37 UTC |
	| addons  | addons-260832 addons disable                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:37 UTC | 08 Jan 24 22:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-260832 addons disable                                                                | addons-260832          | jenkins | v1.32.0 | 08 Jan 24 22:37 UTC | 08 Jan 24 22:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:30:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:30:52.720008 1152828 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:30:52.720153 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:52.720163 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:30:52.720170 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:52.720433 1152828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:30:52.720909 1152828 out.go:303] Setting JSON to false
	I0108 22:30:52.721853 1152828 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18793,"bootTime":1704734260,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:30:52.721935 1152828 start.go:138] virtualization:  
	I0108 22:30:52.724255 1152828 out.go:177] * [addons-260832] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:30:52.727063 1152828 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:30:52.727220 1152828 notify.go:220] Checking for updates...
	I0108 22:30:52.728759 1152828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:30:52.730919 1152828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:30:52.733453 1152828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:30:52.735160 1152828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 22:30:52.736941 1152828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:30:52.739172 1152828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:30:52.763012 1152828 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:30:52.763162 1152828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:52.841592 1152828 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:52.831597255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:52.841697 1152828 docker.go:295] overlay module found
	I0108 22:30:52.845406 1152828 out.go:177] * Using the docker driver based on user configuration
	I0108 22:30:52.847904 1152828 start.go:298] selected driver: docker
	I0108 22:30:52.847923 1152828 start.go:902] validating driver "docker" against <nil>
	I0108 22:30:52.847936 1152828 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:30:52.848604 1152828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:52.909088 1152828 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:52.89984957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:52.909255 1152828 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:30:52.909495 1152828 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:30:52.911835 1152828 out.go:177] * Using Docker driver with root privileges
	I0108 22:30:52.914067 1152828 cni.go:84] Creating CNI manager for ""
	I0108 22:30:52.914087 1152828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:30:52.914099 1152828 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:30:52.914111 1152828 start_flags.go:321] config:
	{Name:addons-260832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-260832 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:30:52.917789 1152828 out.go:177] * Starting control plane node addons-260832 in cluster addons-260832
	I0108 22:30:52.919662 1152828 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:30:52.921634 1152828 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:30:52.923642 1152828 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:30:52.923692 1152828 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 22:30:52.923705 1152828 cache.go:56] Caching tarball of preloaded images
	I0108 22:30:52.923747 1152828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:30:52.923805 1152828 preload.go:174] Found /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 22:30:52.923816 1152828 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:30:52.924190 1152828 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/config.json ...
	I0108 22:30:52.924219 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/config.json: {Name:mkb22be598aaa45015ca9abf655c41191aca9c9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:30:52.940751 1152828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 22:30:52.940900 1152828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 22:30:52.940920 1152828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 22:30:52.940925 1152828 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 22:30:52.940933 1152828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 22:30:52.940938 1152828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d from local cache
	I0108 22:31:08.967672 1152828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d from cached tarball
	I0108 22:31:08.967715 1152828 cache.go:194] Successfully downloaded all kic artifacts
	I0108 22:31:08.967771 1152828 start.go:365] acquiring machines lock for addons-260832: {Name:mk91df8ff482240b824c6379307afe5d8c23c1cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:31:08.967901 1152828 start.go:369] acquired machines lock for "addons-260832" in 107.626µs
	I0108 22:31:08.967929 1152828 start.go:93] Provisioning new machine with config: &{Name:addons-260832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-260832 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:31:08.968013 1152828 start.go:125] createHost starting for "" (driver="docker")
	I0108 22:31:08.970622 1152828 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0108 22:31:08.970867 1152828 start.go:159] libmachine.API.Create for "addons-260832" (driver="docker")
	I0108 22:31:08.970933 1152828 client.go:168] LocalClient.Create starting
	I0108 22:31:08.971044 1152828 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem
	I0108 22:31:09.115251 1152828 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem
	I0108 22:31:09.824814 1152828 cli_runner.go:164] Run: docker network inspect addons-260832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 22:31:09.842635 1152828 cli_runner.go:211] docker network inspect addons-260832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 22:31:09.842731 1152828 network_create.go:281] running [docker network inspect addons-260832] to gather additional debugging logs...
	I0108 22:31:09.842754 1152828 cli_runner.go:164] Run: docker network inspect addons-260832
	W0108 22:31:09.859437 1152828 cli_runner.go:211] docker network inspect addons-260832 returned with exit code 1
	I0108 22:31:09.859472 1152828 network_create.go:284] error running [docker network inspect addons-260832]: docker network inspect addons-260832: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-260832 not found
	I0108 22:31:09.859486 1152828 network_create.go:286] output of [docker network inspect addons-260832]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-260832 not found
	
	** /stderr **
	I0108 22:31:09.859599 1152828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:31:09.877452 1152828 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400266fef0}
	I0108 22:31:09.877495 1152828 network_create.go:124] attempt to create docker network addons-260832 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 22:31:09.877555 1152828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-260832 addons-260832
	I0108 22:31:09.948386 1152828 network_create.go:108] docker network addons-260832 192.168.49.0/24 created
	I0108 22:31:09.948418 1152828 kic.go:121] calculated static IP "192.168.49.2" for the "addons-260832" container
	I0108 22:31:09.948489 1152828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 22:31:09.966044 1152828 cli_runner.go:164] Run: docker volume create addons-260832 --label name.minikube.sigs.k8s.io=addons-260832 --label created_by.minikube.sigs.k8s.io=true
	I0108 22:31:09.984792 1152828 oci.go:103] Successfully created a docker volume addons-260832
	I0108 22:31:09.984896 1152828 cli_runner.go:164] Run: docker run --rm --name addons-260832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-260832 --entrypoint /usr/bin/test -v addons-260832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 22:31:11.959387 1152828 cli_runner.go:217] Completed: docker run --rm --name addons-260832-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-260832 --entrypoint /usr/bin/test -v addons-260832:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib: (1.974445899s)
	I0108 22:31:11.959419 1152828 oci.go:107] Successfully prepared a docker volume addons-260832
	I0108 22:31:11.959444 1152828 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:31:11.959463 1152828 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 22:31:11.959546 1152828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-260832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 22:31:16.195658 1152828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-260832:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (4.236048506s)
	I0108 22:31:16.195693 1152828 kic.go:203] duration metric: took 4.236227 seconds to extract preloaded images to volume
	W0108 22:31:16.195839 1152828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 22:31:16.195958 1152828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 22:31:16.261698 1152828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-260832 --name addons-260832 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-260832 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-260832 --network addons-260832 --ip 192.168.49.2 --volume addons-260832:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:31:16.602378 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Running}}
	I0108 22:31:16.622809 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:16.645362 1152828 cli_runner.go:164] Run: docker exec addons-260832 stat /var/lib/dpkg/alternatives/iptables
	I0108 22:31:16.705032 1152828 oci.go:144] the created container "addons-260832" has a running status.
	I0108 22:31:16.705066 1152828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa...
	I0108 22:31:17.495638 1152828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 22:31:17.531844 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:17.557184 1152828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 22:31:17.557207 1152828 kic_runner.go:114] Args: [docker exec --privileged addons-260832 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 22:31:17.643343 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:17.669392 1152828 machine.go:88] provisioning docker machine ...
	I0108 22:31:17.669490 1152828 ubuntu.go:169] provisioning hostname "addons-260832"
	I0108 22:31:17.669653 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:17.694818 1152828 main.go:141] libmachine: Using SSH client type: native
	I0108 22:31:17.695236 1152828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34033 <nil> <nil>}
	I0108 22:31:17.695248 1152828 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-260832 && echo "addons-260832" | sudo tee /etc/hostname
	I0108 22:31:17.848453 1152828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-260832
	
	I0108 22:31:17.848540 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:17.871480 1152828 main.go:141] libmachine: Using SSH client type: native
	I0108 22:31:17.871973 1152828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34033 <nil> <nil>}
	I0108 22:31:17.871998 1152828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-260832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-260832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-260832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:31:18.015107 1152828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:31:18.015179 1152828 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 22:31:18.015220 1152828 ubuntu.go:177] setting up certificates
	I0108 22:31:18.015230 1152828 provision.go:83] configureAuth start
	I0108 22:31:18.015297 1152828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-260832
	I0108 22:31:18.035478 1152828 provision.go:138] copyHostCerts
	I0108 22:31:18.035568 1152828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 22:31:18.035704 1152828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 22:31:18.035777 1152828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 22:31:18.035837 1152828 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.addons-260832 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-260832]
	I0108 22:31:18.234809 1152828 provision.go:172] copyRemoteCerts
	I0108 22:31:18.234910 1152828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:31:18.234958 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.257802 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:18.355495 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:31:18.383768 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:31:18.412698 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:31:18.440835 1152828 provision.go:86] duration metric: configureAuth took 425.588087ms
	I0108 22:31:18.440862 1152828 ubuntu.go:193] setting minikube options for container-runtime
	I0108 22:31:18.441070 1152828 config.go:182] Loaded profile config "addons-260832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:31:18.441189 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.459556 1152828 main.go:141] libmachine: Using SSH client type: native
	I0108 22:31:18.459981 1152828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34033 <nil> <nil>}
	I0108 22:31:18.460003 1152828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:31:18.705877 1152828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:31:18.705907 1152828 machine.go:91] provisioned docker machine in 1.036492578s
	I0108 22:31:18.705918 1152828 client.go:171] LocalClient.Create took 9.734974771s
	I0108 22:31:18.705973 1152828 start.go:167] duration metric: libmachine.API.Create for "addons-260832" took 9.735105658s
	I0108 22:31:18.705989 1152828 start.go:300] post-start starting for "addons-260832" (driver="docker")
	I0108 22:31:18.706006 1152828 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:31:18.706109 1152828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:31:18.706175 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.724932 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:18.819925 1152828 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:31:18.824236 1152828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 22:31:18.824274 1152828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 22:31:18.824297 1152828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 22:31:18.824305 1152828 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 22:31:18.824316 1152828 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 22:31:18.824384 1152828 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 22:31:18.824411 1152828 start.go:303] post-start completed in 118.415513ms
	I0108 22:31:18.824724 1152828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-260832
	I0108 22:31:18.842307 1152828 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/config.json ...
	I0108 22:31:18.842589 1152828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:31:18.842647 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.859689 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:18.951020 1152828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 22:31:18.956831 1152828 start.go:128] duration metric: createHost completed in 9.98880108s
	I0108 22:31:18.956858 1152828 start.go:83] releasing machines lock for "addons-260832", held for 9.988945316s
	I0108 22:31:18.956930 1152828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-260832
	I0108 22:31:18.974387 1152828 ssh_runner.go:195] Run: cat /version.json
	I0108 22:31:18.974408 1152828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:31:18.974440 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.974468 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:18.991877 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:19.002944 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:19.085608 1152828 ssh_runner.go:195] Run: systemctl --version
	I0108 22:31:19.221384 1152828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:31:19.365900 1152828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 22:31:19.371388 1152828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:31:19.398595 1152828 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 22:31:19.398739 1152828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:31:19.436916 1152828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 22:31:19.436986 1152828 start.go:475] detecting cgroup driver to use...
	I0108 22:31:19.437096 1152828 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 22:31:19.437159 1152828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:31:19.454460 1152828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:31:19.467981 1152828 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:31:19.468086 1152828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:31:19.485121 1152828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:31:19.502168 1152828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:31:19.609632 1152828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:31:19.708359 1152828 docker.go:219] disabling docker service ...
	I0108 22:31:19.708460 1152828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:31:19.731159 1152828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:31:19.745409 1152828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:31:19.847290 1152828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:31:19.959321 1152828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:31:19.972808 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:31:19.993934 1152828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:31:19.994049 1152828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:31:20.007477 1152828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:31:20.007641 1152828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:31:20.022751 1152828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:31:20.037136 1152828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
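The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod". A hedged way to confirm the resulting drop-in by hand (file path taken from the log; the surrounding TOML section layout is an assumption):

    # Verify the values written by the sed edits above (illustrative only).
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"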
	I0108 22:31:20.050543 1152828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:31:20.062620 1152828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:31:20.073571 1152828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:31:20.085133 1152828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:31:20.193474 1152828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 22:31:20.324053 1152828 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:31:20.324180 1152828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:31:20.328790 1152828 start.go:543] Will wait 60s for crictl version
	I0108 22:31:20.328885 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:31:20.333404 1152828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:31:20.377209 1152828 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 22:31:20.377348 1152828 ssh_runner.go:195] Run: crio --version
	I0108 22:31:20.422631 1152828 ssh_runner.go:195] Run: crio --version
	I0108 22:31:20.472672 1152828 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 22:31:20.474901 1152828 cli_runner.go:164] Run: docker network inspect addons-260832 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:31:20.495235 1152828 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 22:31:20.499969 1152828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:31:20.513308 1152828 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:31:20.513377 1152828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:31:20.580432 1152828 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:31:20.580457 1152828 crio.go:415] Images already preloaded, skipping extraction
	I0108 22:31:20.580516 1152828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:31:20.620506 1152828 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:31:20.620528 1152828 cache_images.go:84] Images are preloaded, skipping loading
	I0108 22:31:20.620607 1152828 ssh_runner.go:195] Run: crio config
	I0108 22:31:20.675078 1152828 cni.go:84] Creating CNI manager for ""
	I0108 22:31:20.675104 1152828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:31:20.675136 1152828 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:31:20.675186 1152828 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-260832 NodeName:addons-260832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:31:20.675385 1152828 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-260832"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:31:20.675502 1152828 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-260832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-260832 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:31:20.675593 1152828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:31:20.686410 1152828 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:31:20.686530 1152828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:31:20.697175 1152828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0108 22:31:20.718631 1152828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:31:20.740686 1152828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0108 22:31:20.762229 1152828 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 22:31:20.766803 1152828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:31:20.779991 1152828 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832 for IP: 192.168.49.2
	I0108 22:31:20.780076 1152828 certs.go:190] acquiring lock for shared ca certs: {Name:mk2f5e9ada40477437d91c2ac8d6b62bb5d1e97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:20.780231 1152828 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key
	I0108 22:31:21.293953 1152828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt ...
	I0108 22:31:21.293982 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt: {Name:mk92ec24c89080da14dfcc8a06cbadf6227cfa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.294520 1152828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key ...
	I0108 22:31:21.294537 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key: {Name:mk730ca5fe94b9d830fe3ad67b4a1de5be3a5883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.295583 1152828 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key
	I0108 22:31:21.619385 1152828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt ...
	I0108 22:31:21.619415 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt: {Name:mk1c4f3df6711d8b4609c5e88d0636a47cc43590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.620385 1152828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key ...
	I0108 22:31:21.620401 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key: {Name:mkdc056509f59e1d360c37afe8c3824a62424043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.620541 1152828 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.key
	I0108 22:31:21.620561 1152828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt with IP's: []
	I0108 22:31:21.903166 1152828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt ...
	I0108 22:31:21.903199 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: {Name:mk3dee880bb191c7f21b5432fb643a01ff3a1f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.903383 1152828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.key ...
	I0108 22:31:21.903394 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.key: {Name:mk9f77fd4700e8bcc625a2c7847261a055b00010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:21.903475 1152828 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key.dd3b5fb2
	I0108 22:31:21.903495 1152828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:31:22.535964 1152828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt.dd3b5fb2 ...
	I0108 22:31:22.536007 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt.dd3b5fb2: {Name:mk58a481e072b016664650a28a93e09df2d065a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:22.536244 1152828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key.dd3b5fb2 ...
	I0108 22:31:22.536261 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key.dd3b5fb2: {Name:mk05f65a6de54fe1962fbb4c1651adb8e5dc7aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:22.536914 1152828 certs.go:337] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt
	I0108 22:31:22.537027 1152828 certs.go:341] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key
	I0108 22:31:22.537087 1152828 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.key
	I0108 22:31:22.537106 1152828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.crt with IP's: []
	I0108 22:31:23.401767 1152828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.crt ...
	I0108 22:31:23.401800 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.crt: {Name:mkfbf611cec7018786032ec8d1b8508257bbf46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:23.402000 1152828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.key ...
	I0108 22:31:23.402015 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.key: {Name:mk17cae0035530eb4f0271933cab57ba92f78d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:23.402814 1152828 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:31:23.402866 1152828 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:31:23.402895 1152828 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:31:23.402926 1152828 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem (1675 bytes)
	I0108 22:31:23.403640 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:31:23.435389 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:31:23.464306 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:31:23.493126 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 22:31:23.521135 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:31:23.549806 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:31:23.577934 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:31:23.605382 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:31:23.633667 1152828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:31:23.662684 1152828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:31:23.683873 1152828 ssh_runner.go:195] Run: openssl version
	I0108 22:31:23.691097 1152828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:31:23.702603 1152828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:23.707325 1152828 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:23.707424 1152828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:31:23.715802 1152828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
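The b5213941.0 name in the symlink above is the certificate's OpenSSL subject hash, the same value the openssl x509 -hash call just before it prints. A minimal sketch of performing that step by hand, assuming the paths shown in the log:

    # Derive the subject hash and create the matching trust-store symlink (sketch, not minikube's code).
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"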
	I0108 22:31:23.727425 1152828 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:31:23.731796 1152828 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:31:23.731844 1152828 kubeadm.go:404] StartCluster: {Name:addons-260832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-260832 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:31:23.731918 1152828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:31:23.731992 1152828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:31:23.773962 1152828 cri.go:89] found id: ""
	I0108 22:31:23.774079 1152828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:31:23.784743 1152828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:31:23.795449 1152828 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 22:31:23.795541 1152828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:31:23.806219 1152828 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:31:23.806262 1152828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 22:31:23.861312 1152828 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:31:23.861667 1152828 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:31:23.906849 1152828 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:31:23.906994 1152828 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 22:31:23.907075 1152828 kubeadm.go:322] OS: Linux
	I0108 22:31:23.907152 1152828 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 22:31:23.907234 1152828 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 22:31:23.907304 1152828 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 22:31:23.907386 1152828 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 22:31:23.907462 1152828 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 22:31:23.907548 1152828 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 22:31:23.907622 1152828 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 22:31:23.907709 1152828 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 22:31:23.907783 1152828 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 22:31:23.984033 1152828 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:31:23.984208 1152828 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:31:23.984337 1152828 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:31:24.248395 1152828 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:31:24.252327 1152828 out.go:204]   - Generating certificates and keys ...
	I0108 22:31:24.252430 1152828 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:31:24.252507 1152828 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:31:24.493717 1152828 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:31:25.367127 1152828 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:31:25.769573 1152828 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:31:26.073228 1152828 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:31:26.690674 1152828 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:31:26.690808 1152828 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-260832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:31:27.377283 1152828 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:31:27.377562 1152828 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-260832 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:31:28.488958 1152828 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:31:29.053990 1152828 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:31:30.902268 1152828 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:31:30.902542 1152828 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:31:31.388252 1152828 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:31:32.194115 1152828 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:31:32.757418 1152828 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:31:33.131340 1152828 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:31:33.132405 1152828 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:31:33.135342 1152828 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:31:33.138332 1152828 out.go:204]   - Booting up control plane ...
	I0108 22:31:33.138442 1152828 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:31:33.138516 1152828 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:31:33.141016 1152828 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:31:33.152335 1152828 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:31:33.153378 1152828 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:31:33.153677 1152828 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:31:33.257532 1152828 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:31:40.260291 1152828 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002985 seconds
	I0108 22:31:40.260412 1152828 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:31:40.275948 1152828 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:31:40.803018 1152828 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:31:40.803206 1152828 kubeadm.go:322] [mark-control-plane] Marking the node addons-260832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:31:41.315791 1152828 kubeadm.go:322] [bootstrap-token] Using token: ugtckk.ijmocmndx4hkwchh
	I0108 22:31:41.317707 1152828 out.go:204]   - Configuring RBAC rules ...
	I0108 22:31:41.317828 1152828 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:31:41.325237 1152828 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:31:41.333861 1152828 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:31:41.338947 1152828 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:31:41.343905 1152828 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:31:41.349110 1152828 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:31:41.366730 1152828 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:31:41.623138 1152828 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:31:41.737650 1152828 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:31:41.739535 1152828 kubeadm.go:322] 
	I0108 22:31:41.739613 1152828 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:31:41.739626 1152828 kubeadm.go:322] 
	I0108 22:31:41.739704 1152828 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:31:41.739713 1152828 kubeadm.go:322] 
	I0108 22:31:41.739737 1152828 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:31:41.739797 1152828 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:31:41.739849 1152828 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:31:41.739857 1152828 kubeadm.go:322] 
	I0108 22:31:41.739909 1152828 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:31:41.739917 1152828 kubeadm.go:322] 
	I0108 22:31:41.739963 1152828 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:31:41.739971 1152828 kubeadm.go:322] 
	I0108 22:31:41.740021 1152828 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:31:41.740101 1152828 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:31:41.740167 1152828 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:31:41.740172 1152828 kubeadm.go:322] 
	I0108 22:31:41.740251 1152828 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:31:41.740323 1152828 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:31:41.740328 1152828 kubeadm.go:322] 
	I0108 22:31:41.740406 1152828 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ugtckk.ijmocmndx4hkwchh \
	I0108 22:31:41.740503 1152828 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 \
	I0108 22:31:41.740523 1152828 kubeadm.go:322] 	--control-plane 
	I0108 22:31:41.740533 1152828 kubeadm.go:322] 
	I0108 22:31:41.740613 1152828 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:31:41.740618 1152828 kubeadm.go:322] 
	I0108 22:31:41.740695 1152828 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ugtckk.ijmocmndx4hkwchh \
	I0108 22:31:41.740801 1152828 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 
	I0108 22:31:41.743627 1152828 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 22:31:41.743739 1152828 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:31:41.743760 1152828 cni.go:84] Creating CNI manager for ""
	I0108 22:31:41.743768 1152828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:31:41.745843 1152828 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 22:31:41.747668 1152828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 22:31:41.764974 1152828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 22:31:41.765011 1152828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 22:31:41.811501 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 22:31:42.807367 1152828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:31:42.807463 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:42.807488 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=addons-260832 minikube.k8s.io/updated_at=2024_01_08T22_31_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
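The kubectl label call above stamps the node with minikube's version, commit, profile name, and creation time. A quick, hedged way to inspect those labels afterwards (standard kubectl; not part of this log):

    # Show all labels on the freshly provisioned control-plane node (illustrative).
    kubectl get node addons-260832 --show-labels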
	I0108 22:31:42.995969 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:42.996048 1152828 ops.go:34] apiserver oom_adj: -16
	I0108 22:31:43.496980 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:43.996825 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:44.496774 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:44.996335 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:45.496807 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:45.996583 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:46.496543 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:46.996881 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:47.496637 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:47.996196 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:48.496111 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:48.996803 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:49.496111 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:49.996875 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:50.496823 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:50.996591 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:51.497069 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:51.996190 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:52.496747 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:52.996818 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:53.496069 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:53.997102 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:54.496947 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:54.996120 1152828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:31:55.140664 1152828 kubeadm.go:1088] duration metric: took 12.333260623s to wait for elevateKubeSystemPrivileges.
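The block of repeated "get sa default" calls above is minikube polling for the default service account before granting kube-system elevated privileges; the timestamps show roughly one attempt every 500ms for about 12s. A rough shell equivalent (a sketch only; minikube does this in Go, and the interval is inferred from the log):

    # Poll until the default service account exists, then proceed (illustrative).
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done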
	I0108 22:31:55.140707 1152828 kubeadm.go:406] StartCluster complete in 31.408866409s
	I0108 22:31:55.140725 1152828 settings.go:142] acquiring lock: {Name:mk4ee991c68e71724ae577ac1a9a811b1b4e899c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:55.140849 1152828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:31:55.141244 1152828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/kubeconfig: {Name:mk4903c0deda408cf5380ebed8399fb64deac655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:31:55.141857 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:31:55.142126 1152828 config.go:182] Loaded profile config "addons-260832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:31:55.142238 1152828 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
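The toEnable map above is the addon set requested for this profile (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, and others set to true). For reference, the same toggles are exposed through the minikube CLI; a hedged example using this run's profile name:

    # List addon states for the profile, then enable one explicitly (illustrative).
    minikube -p addons-260832 addons list
    minikube -p addons-260832 addons enable ingress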
	I0108 22:31:55.142312 1152828 addons.go:69] Setting yakd=true in profile "addons-260832"
	I0108 22:31:55.142327 1152828 addons.go:237] Setting addon yakd=true in "addons-260832"
	I0108 22:31:55.142385 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.142842 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.144236 1152828 addons.go:69] Setting metrics-server=true in profile "addons-260832"
	I0108 22:31:55.144264 1152828 addons.go:237] Setting addon metrics-server=true in "addons-260832"
	I0108 22:31:55.144302 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.144741 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.145042 1152828 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-260832"
	I0108 22:31:55.145068 1152828 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-260832"
	I0108 22:31:55.145104 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.145492 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.145646 1152828 addons.go:69] Setting cloud-spanner=true in profile "addons-260832"
	I0108 22:31:55.145662 1152828 addons.go:237] Setting addon cloud-spanner=true in "addons-260832"
	I0108 22:31:55.145701 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.146074 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.148612 1152828 addons.go:69] Setting registry=true in profile "addons-260832"
	I0108 22:31:55.148641 1152828 addons.go:237] Setting addon registry=true in "addons-260832"
	I0108 22:31:55.148704 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.149148 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.150049 1152828 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-260832"
	I0108 22:31:55.150138 1152828 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-260832"
	I0108 22:31:55.150195 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.154132 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.164825 1152828 addons.go:69] Setting storage-provisioner=true in profile "addons-260832"
	I0108 22:31:55.164861 1152828 addons.go:237] Setting addon storage-provisioner=true in "addons-260832"
	I0108 22:31:55.164913 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.165514 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.175490 1152828 addons.go:69] Setting default-storageclass=true in profile "addons-260832"
	I0108 22:31:55.175529 1152828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-260832"
	I0108 22:31:55.175903 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.183831 1152828 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-260832"
	I0108 22:31:55.183882 1152828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-260832"
	I0108 22:31:55.184252 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.188065 1152828 addons.go:69] Setting volumesnapshots=true in profile "addons-260832"
	I0108 22:31:55.188106 1152828 addons.go:237] Setting addon volumesnapshots=true in "addons-260832"
	I0108 22:31:55.188158 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.188705 1152828 addons.go:69] Setting gcp-auth=true in profile "addons-260832"
	I0108 22:31:55.188733 1152828 mustload.go:65] Loading cluster: addons-260832
	I0108 22:31:55.188914 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.188931 1152828 config.go:182] Loaded profile config "addons-260832": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:31:55.189256 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.207374 1152828 addons.go:69] Setting ingress=true in profile "addons-260832"
	I0108 22:31:55.207410 1152828 addons.go:237] Setting addon ingress=true in "addons-260832"
	I0108 22:31:55.207483 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.207928 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.248824 1152828 addons.go:69] Setting ingress-dns=true in profile "addons-260832"
	I0108 22:31:55.248859 1152828 addons.go:237] Setting addon ingress-dns=true in "addons-260832"
	I0108 22:31:55.248926 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.249407 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.273646 1152828 addons.go:69] Setting inspektor-gadget=true in profile "addons-260832"
	I0108 22:31:55.273717 1152828 addons.go:237] Setting addon inspektor-gadget=true in "addons-260832"
	I0108 22:31:55.273785 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.274249 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.292465 1152828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:31:55.295870 1152828 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:31:55.295967 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:31:55.296071 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.337858 1152828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:31:55.342267 1152828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:31:55.344935 1152828 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:31:55.345016 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:31:55.345108 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.352587 1152828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:31:55.354348 1152828 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:31:55.354367 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:31:55.354438 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.387889 1152828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:31:55.390319 1152828 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:31:55.390373 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:31:55.390459 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.409788 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:31:55.411895 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:31:55.414269 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:31:55.413094 1152828 addons.go:237] Setting addon default-storageclass=true in "addons-260832"
	I0108 22:31:55.423199 1152828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:31:55.417208 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.439164 1152828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:31:55.441399 1152828 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:31:55.441417 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:31:55.441481 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.439790 1152828 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-260832"
	I0108 22:31:55.441736 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.442194 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.431331 1152828 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:31:55.475464 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:31:55.475533 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.495050 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:31:55.487736 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:31:55.431223 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:31:55.431802 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:31:55.506593 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:31:55.506632 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:31:55.506703 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.519970 1152828 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:31:55.521505 1152828 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:31:55.521526 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:31:55.521589 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.543914 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:31:55.547745 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:31:55.549552 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:31:55.552256 1152828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:31:55.554328 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:31:55.559613 1152828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:31:55.567966 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:31:55.568024 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:31:55.568125 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.558929 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:31:55.568398 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:31:55.568441 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.554379 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
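Each sshutil client opened here dials the same host-mapped SSH port that the repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls resolve (34033 in this run). For poking at the node by hand while reproducing this failure, an equivalent session can be opened directly; the port, key path and user below are copied from this log, while the ssh options are only illustrative:

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa \
	  -p 34033 docker@127.0.0.1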
	I0108 22:31:55.558951 1152828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:31:55.600698 1152828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:31:55.610127 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.613095 1152828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:31:55.617279 1152828 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:31:55.617309 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:31:55.617381 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.632109 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.652104 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.668352 1152828 out.go:177]   - Using image docker.io/busybox:stable
	I0108 22:31:55.675341 1152828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:31:55.680572 1152828 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:31:55.680591 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:31:55.680657 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.676657 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.680422 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.708170 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.714935 1152828 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-260832" context rescaled to 1 replicas
	I0108 22:31:55.714967 1152828 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:31:55.716690 1152828 out.go:177] * Verifying Kubernetes components...
	I0108 22:31:55.718541 1152828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
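start.go:223 above sets a 6m0s budget for the node to become Ready; the kubelet is probed over SSH (systemctl is-active) and the node status is then polled through the API, which produces the node_ready.go "Ready":"False" lines further down. A rough external equivalent of that wait, for checking a cluster by hand rather than what minikube runs internally (the timeout simply mirrors the budget in the log):

	kubectl --context addons-260832 wait --for=condition=Ready node/addons-260832 --timeout=6m0s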
	I0108 22:31:55.738956 1152828 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:31:55.738976 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:31:55.739039 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:31:55.766820 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.776479 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.781035 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.821112 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.832257 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:31:55.835637 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	W0108 22:31:55.837265 1152828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0108 22:31:55.837298 1152828 retry.go:31] will retry after 307.411891ms: ssh: handshake failed: EOF
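The handshake EOF above is transient: sshd inside the freshly started node container is not accepting connections yet, so retry.go backs off and dials again. A minimal shell sketch of the same poll-until-up idea, assuming the port, key path and user from this log (illustrative only, not minikube code):

	# keep retrying until the node's sshd completes a handshake and accepts the key
	until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa \
	    -p 34033 docker@127.0.0.1 true 2>/dev/null; do
	  sleep 0.3
	done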
	I0108 22:31:56.055694 1152828 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:31:56.055764 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:31:56.087149 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:31:56.108570 1152828 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:31:56.108640 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:31:56.196419 1152828 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:31:56.196489 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:31:56.210187 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:31:56.210254 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:31:56.232265 1152828 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:31:56.232337 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:31:56.236250 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:31:56.247525 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:31:56.311299 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:31:56.311372 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:31:56.334971 1152828 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:31:56.334999 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:31:56.338483 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:31:56.340617 1152828 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:31:56.340678 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:31:56.379997 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:31:56.383719 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:31:56.388282 1152828 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:31:56.388347 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:31:56.396570 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:31:56.436431 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:31:56.436455 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:31:56.468591 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:31:56.468658 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:31:56.546446 1152828 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:31:56.546514 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:31:56.564230 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:31:56.564894 1152828 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:31:56.564939 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:31:56.622059 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:31:56.622132 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:31:56.658871 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:31:56.658938 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:31:56.663207 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:31:56.725939 1152828 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:31:56.726007 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:31:56.798073 1152828 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:31:56.798142 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:31:56.811008 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:31:56.811077 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:31:56.907261 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:31:56.907331 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:31:56.932237 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:31:56.932307 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:31:57.018260 1152828 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:31:57.018335 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:31:57.064173 1152828 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:31:57.064249 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:31:57.112044 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:31:57.112120 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:31:57.115472 1152828 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:31:57.115540 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:31:57.191744 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:31:57.191804 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:31:57.273634 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:31:57.307846 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:31:57.316122 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:31:57.316194 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:31:57.370820 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:31:57.370896 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:31:57.471780 1152828 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:31:57.471840 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:31:57.535215 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:31:57.535284 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:31:57.596687 1152828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.058053125s)
	I0108 22:31:57.596762 1152828 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
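The bash pipeline that just completed (2.058s) rewrites the coredns ConfigMap in kube-system so that host.minikube.internal resolves to the gateway address 192.168.49.1. Reconstructed from the sed expression in the command itself, the injected stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

On a live cluster the result can be checked with something like the following (the grep is only a convenience):

	kubectl --context addons-260832 -n kube-system get configmap coredns -o yaml | grep -A 3 "hosts {"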
	I0108 22:31:57.596804 1152828 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.878243802s)
	I0108 22:31:57.597804 1152828 node_ready.go:35] waiting up to 6m0s for node "addons-260832" to be "Ready" ...
	I0108 22:31:57.610196 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:31:57.745411 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:31:57.745485 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:31:57.905607 1152828 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:31:57.905676 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:31:58.036030 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:31:58.639703 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.552513402s)
	I0108 22:31:59.654884 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:31:59.828790 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.592444741s)
	I0108 22:32:01.016508 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.677943385s)
	I0108 22:32:01.016649 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.769054566s)
	I0108 22:32:01.078173 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.694382815s)
	I0108 22:32:01.078252 1152828 addons.go:473] Verifying addon registry=true in "addons-260832"
	I0108 22:32:01.080876 1152828 out.go:177] * Verifying registry addon...
	I0108 22:32:01.078403 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.698031717s)
	I0108 22:32:01.083542 1152828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 22:32:01.154089 1152828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:32:01.154730 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
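kapi.go polls the pods matching each addon's label selector until they report Ready; every "current state: Pending" line below is one poll tick for one selector (registry, then ingress-nginx, csi-hostpath-driver and gcp-auth). An approximate kubectl equivalent of the registry wait, useful when reproducing this by hand rather than the code path minikube uses (the 6m timeout is an assumption):

	kubectl --context addons-260832 -n kube-system wait \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m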
	I0108 22:32:01.592427 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:01.740853 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.176537773s)
	I0108 22:32:01.740936 1152828 addons.go:473] Verifying addon metrics-server=true in "addons-260832"
	I0108 22:32:01.741008 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.077714382s)
	I0108 22:32:01.741291 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.467581424s)
	I0108 22:32:01.745134 1152828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-260832 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:32:01.741521 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.344862036s)
	I0108 22:32:01.741700 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.131430527s)
	I0108 22:32:01.741806 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.433716201s)
	W0108 22:32:01.747597 1152828 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:32:01.747649 1152828 retry.go:31] will retry after 350.105888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:32:01.747699 1152828 addons.go:473] Verifying addon ingress=true in "addons-260832"
	I0108 22:32:01.750028 1152828 out.go:177] * Verifying ingress addon...
	I0108 22:32:01.752721 1152828 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:32:01.764913 1152828 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:32:01.764938 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:02.087473 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.051338489s)
	I0108 22:32:02.087557 1152828 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-260832"
	I0108 22:32:02.089623 1152828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:32:02.093235 1152828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:32:02.098177 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
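This --force re-apply is the retry scheduled at 22:32:01: the first batch failed because csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl apply as the CRDs that introduce that kind, and the CRDs were not yet established ("ensure CRDs are installed first"). By the time of the retry the CRDs exist and the apply succeeds (1.44s, below). When applying such a bundle by hand, the race can be avoided by splitting it and waiting for the CRDs first; a sketch using the manifest paths from this log:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml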
	I0108 22:32:02.110125 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:02.115076 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:02.120096 1152828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:32:02.120119 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:02.256716 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:02.365926 1152828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:32:02.366070 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:32:02.396141 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:32:02.547728 1152828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:32:02.597319 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:02.599735 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:02.602386 1152828 addons.go:237] Setting addon gcp-auth=true in "addons-260832"
	I0108 22:32:02.602449 1152828 host.go:66] Checking if "addons-260832" exists ...
	I0108 22:32:02.602923 1152828 cli_runner.go:164] Run: docker container inspect addons-260832 --format={{.State.Status}}
	I0108 22:32:02.632543 1152828 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:32:02.632599 1152828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-260832
	I0108 22:32:02.651954 1152828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/addons-260832/id_rsa Username:docker}
	I0108 22:32:02.767926 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:03.089406 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:03.116577 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:03.266151 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:03.536936 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.438708463s)
	I0108 22:32:03.539718 1152828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:32:03.542015 1152828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:32:03.544220 1152828 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:32:03.544246 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 22:32:03.588520 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:03.597947 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:03.605528 1152828 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:32:03.605563 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:32:03.645359 1152828 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:32:03.645382 1152828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:32:03.674052 1152828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:32:03.757474 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:04.088413 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:04.128984 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:04.129762 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:04.256930 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:04.631611 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:04.634192 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:04.772765 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:04.969430 1152828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.295327866s)
	I0108 22:32:04.972350 1152828 addons.go:473] Verifying addon gcp-auth=true in "addons-260832"
	I0108 22:32:04.975129 1152828 out.go:177] * Verifying gcp-auth addon...
	I0108 22:32:04.978213 1152828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:32:04.989556 1152828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:32:04.989582 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:05.088770 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:05.111283 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:05.261563 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:05.482944 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:05.588837 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:05.600506 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:05.758538 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:05.982426 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:06.089768 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:06.106028 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:06.257705 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:06.486174 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:06.588860 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:06.598186 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:06.603119 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:06.757542 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:06.982548 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:07.094052 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:07.105626 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:07.258226 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:07.482153 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:07.591627 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:07.598658 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:07.757565 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:07.982684 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:08.091063 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:08.109936 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:08.258119 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:08.482235 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:08.588371 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:08.598313 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:08.757173 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:08.981963 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:09.098925 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:09.115390 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:09.116368 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:09.257566 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:09.482199 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:09.588633 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:09.598726 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:09.759828 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:09.982513 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:10.101007 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:10.113478 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:10.256952 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:10.482582 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:10.587816 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:10.598084 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:10.757596 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:10.983048 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:11.089302 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:11.123069 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:11.257525 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:11.482310 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:11.588080 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:11.598362 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:11.601393 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:11.757861 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:11.986769 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:12.088953 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:12.105764 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:12.256879 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:12.482727 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:12.588513 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:12.603277 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:12.757717 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:12.982846 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:13.088740 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:13.103413 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:13.257473 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:13.482035 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:13.587717 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:13.599519 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:13.607719 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:13.757397 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:13.982075 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:14.088087 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:14.103098 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:14.256985 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:14.481924 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:14.587508 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:14.598837 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:14.757477 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:14.981575 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:15.089151 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:15.099608 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:15.257291 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:15.482123 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:15.587863 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:15.598095 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:15.757015 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:15.981519 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:16.087649 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:16.099245 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:16.102189 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:16.257225 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:16.482154 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:16.588938 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:16.597644 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:16.757694 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:16.982755 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:17.088849 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:17.100580 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:17.257787 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:17.482557 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:17.591010 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:17.597810 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:17.757127 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:17.981956 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:18.087990 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:18.100577 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:18.256801 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:18.482875 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:18.587926 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:18.598217 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:18.601203 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:18.757738 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:18.982723 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:19.088941 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:19.099659 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:19.257602 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:19.482373 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:19.587980 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:19.597924 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:19.757833 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:19.981660 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:20.089588 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:20.103973 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:20.257306 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:20.482321 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:20.588000 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:20.598301 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:20.601307 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:20.757456 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:20.982239 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:21.088775 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:21.101690 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:21.257732 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:21.482609 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:21.588855 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:21.597603 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:21.757245 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:21.982000 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:22.087841 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:22.103080 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:22.257304 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:22.482065 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:22.587746 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:22.597988 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:22.757932 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:22.981979 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:23.087828 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:23.099308 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:23.102402 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:23.257716 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:23.482495 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:23.587586 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:23.597492 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:23.757221 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:23.982019 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:24.088237 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:24.099300 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:24.257218 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:24.481871 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:24.588103 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:24.598193 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:24.757269 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:24.981893 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:25.106739 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:25.113921 1152828 node_ready.go:58] node "addons-260832" has status "Ready":"False"
	I0108 22:32:25.114956 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:25.262394 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:25.493165 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:25.606729 1152828 node_ready.go:49] node "addons-260832" has status "Ready":"True"
	I0108 22:32:25.606797 1152828 node_ready.go:38] duration metric: took 28.008932745s waiting for node "addons-260832" to be "Ready" ...
	I0108 22:32:25.606837 1152828 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:32:25.621858 1152828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:32:25.621938 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:25.634434 1152828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:32:25.634507 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:25.638667 1152828 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nqqkm" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:25.812615 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:25.998039 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:26.119690 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:26.136546 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:26.258619 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:26.508895 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:26.590883 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:26.603591 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:26.764491 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:26.982618 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:27.091494 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:27.100582 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:27.257604 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:27.484220 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:27.590235 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:27.599168 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:27.647138 1152828 pod_ready.go:92] pod "coredns-5dd5756b68-nqqkm" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:27.647166 1152828 pod_ready.go:81] duration metric: took 2.008431328s waiting for pod "coredns-5dd5756b68-nqqkm" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.647195 1152828 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.667387 1152828 pod_ready.go:92] pod "etcd-addons-260832" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:27.667415 1152828 pod_ready.go:81] duration metric: took 20.200577ms waiting for pod "etcd-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.667438 1152828 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.677505 1152828 pod_ready.go:92] pod "kube-apiserver-addons-260832" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:27.677545 1152828 pod_ready.go:81] duration metric: took 10.098606ms waiting for pod "kube-apiserver-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.677560 1152828 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.685970 1152828 pod_ready.go:92] pod "kube-controller-manager-addons-260832" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:27.685989 1152828 pod_ready.go:81] duration metric: took 8.420439ms waiting for pod "kube-controller-manager-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.686003 1152828 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ts9nw" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.700318 1152828 pod_ready.go:92] pod "kube-proxy-ts9nw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:27.700354 1152828 pod_ready.go:81] duration metric: took 14.343239ms waiting for pod "kube-proxy-ts9nw" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.700366 1152828 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:27.758379 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:27.983048 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:28.042870 1152828 pod_ready.go:92] pod "kube-scheduler-addons-260832" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:28.042944 1152828 pod_ready.go:81] duration metric: took 342.569134ms waiting for pod "kube-scheduler-addons-260832" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:28.042972 1152828 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:28.091018 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:28.102817 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:28.257138 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:28.484926 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:28.596244 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:28.602060 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:28.758893 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:28.982481 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:29.096761 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:29.105185 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:29.266124 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:29.481778 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:29.588763 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:29.599962 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:29.760275 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:29.982596 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:30.057949 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:30.090969 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:30.104852 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:30.259176 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:30.482837 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:30.589427 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:30.599287 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:30.761889 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:30.986860 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:31.098792 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:31.117341 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:31.257677 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:31.483355 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:31.588906 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:31.599170 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:31.757635 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:31.981891 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:32.089932 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:32.099483 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:32.258113 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:32.481959 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:32.550173 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:32.588297 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:32.598851 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:32.758488 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:32.983516 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:33.088546 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:33.101361 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:33.258064 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:33.482589 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:33.589768 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:33.602420 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:33.758272 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:33.982721 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:34.088967 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:34.100445 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:34.258774 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:34.482665 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:34.550365 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:34.589542 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:34.598927 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:34.757664 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:34.983015 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:35.091654 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:35.105538 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:35.258455 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:35.483211 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:35.588858 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:35.600206 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:35.757948 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:35.982354 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:36.088703 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:36.101798 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:36.257567 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:36.482102 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:36.588870 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:36.598735 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:36.762806 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:36.982166 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:37.061501 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:37.099916 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:37.108615 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:37.258009 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:37.482976 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:37.591108 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:37.601556 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:37.766671 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:37.991653 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:38.098521 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:38.106746 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:38.257968 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:38.483379 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:38.589183 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:38.601580 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:38.758001 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:38.982877 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:39.090032 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:39.102693 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:39.258222 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:39.487623 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:39.555214 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:39.589763 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:39.604567 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:39.758177 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:39.982259 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:40.091212 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:40.126619 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:40.258674 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:40.482662 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:40.589154 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:40.600576 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:40.759681 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:40.983105 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:41.090347 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:41.117911 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:41.258309 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:41.483191 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:41.592647 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:41.608043 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:41.757934 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:41.982585 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:42.051074 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:42.094516 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:42.106205 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:42.259433 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:42.482623 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:42.593140 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:42.609927 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:42.757984 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:42.983314 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:43.095916 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:43.118812 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:43.265238 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:43.482054 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:43.590233 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:43.601222 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:43.763947 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:43.983153 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:44.093206 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:44.107654 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:44.257568 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:44.482795 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:44.552060 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:44.588289 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:44.598923 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:44.757256 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:44.981826 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:45.100662 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:45.109471 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:45.260179 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:45.485575 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:45.589256 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:45.599910 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:45.757998 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:45.981626 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:46.119602 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:46.145208 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:46.257730 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:46.485741 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:46.588325 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:46.599074 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:46.758890 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:46.982500 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:47.050055 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:47.089100 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:47.107008 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:47.265704 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:47.486914 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:47.589614 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:47.600605 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:47.761475 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:47.982986 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:48.090921 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:48.104468 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:48.258537 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:48.483023 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:48.589740 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:48.600914 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:48.758775 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:48.983880 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:49.053487 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:49.095736 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:49.110739 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:49.263944 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:49.482875 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:49.612140 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:49.631623 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:49.758944 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:49.983042 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:50.095495 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:50.104792 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:50.261838 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:50.482981 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:50.607243 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:50.668408 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:50.759724 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:50.987222 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:51.089636 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:51.123606 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:51.258734 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:51.482894 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:51.551098 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:51.588639 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:51.600667 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:51.759789 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:51.982762 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:52.114018 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:52.122290 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:52.257804 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:52.483132 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:52.589443 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:52.603700 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:52.757649 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:52.982368 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:53.089289 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:53.103285 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:53.259559 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:53.481888 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:53.589175 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:53.599260 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:53.762535 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:53.982731 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:54.054911 1152828 pod_ready.go:102] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"False"
	I0108 22:32:54.101040 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:54.135388 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:54.260929 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:54.483472 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:54.599765 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:54.641081 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:54.759550 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:54.983694 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:55.089912 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:55.109397 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:55.258250 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:55.485144 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:55.588771 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:55.599238 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:55.765457 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:55.997518 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:56.110047 1152828 pod_ready.go:92] pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:56.110068 1152828 pod_ready.go:81] duration metric: took 28.067073472s waiting for pod "metrics-server-7c66d45ddc-42csc" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:56.110080 1152828 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ljg8m" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:56.111353 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:56.114650 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:56.137839 1152828 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ljg8m" in "kube-system" namespace has status "Ready":"True"
	I0108 22:32:56.137910 1152828 pod_ready.go:81] duration metric: took 26.532761ms waiting for pod "nvidia-device-plugin-daemonset-ljg8m" in "kube-system" namespace to be "Ready" ...
	I0108 22:32:56.137948 1152828 pod_ready.go:38] duration metric: took 30.531080243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:32:56.137994 1152828 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:32:56.138041 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:32:56.138126 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:32:56.244173 1152828 cri.go:89] found id: "e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:32:56.244241 1152828 cri.go:89] found id: ""
	I0108 22:32:56.244262 1152828 logs.go:284] 1 containers: [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460]
	I0108 22:32:56.244347 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.249455 1152828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:32:56.249582 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:32:56.259796 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:56.318481 1152828 cri.go:89] found id: "d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:32:56.318547 1152828 cri.go:89] found id: ""
	I0108 22:32:56.318568 1152828 logs.go:284] 1 containers: [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939]
	I0108 22:32:56.318655 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.329701 1152828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:32:56.329869 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:32:56.402665 1152828 cri.go:89] found id: "6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:32:56.402732 1152828 cri.go:89] found id: ""
	I0108 22:32:56.402754 1152828 logs.go:284] 1 containers: [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8]
	I0108 22:32:56.402843 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.414352 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:32:56.414472 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:32:56.469827 1152828 cri.go:89] found id: "ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:32:56.469850 1152828 cri.go:89] found id: ""
	I0108 22:32:56.469858 1152828 logs.go:284] 1 containers: [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e]
	I0108 22:32:56.469940 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.482562 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:32:56.482642 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:32:56.484597 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:56.561972 1152828 cri.go:89] found id: "324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:32:56.561998 1152828 cri.go:89] found id: ""
	I0108 22:32:56.562007 1152828 logs.go:284] 1 containers: [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89]
	I0108 22:32:56.562087 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.567854 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:32:56.567964 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:32:56.591462 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:56.603435 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:56.630336 1152828 cri.go:89] found id: "f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:32:56.630369 1152828 cri.go:89] found id: ""
	I0108 22:32:56.630377 1152828 logs.go:284] 1 containers: [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa]
	I0108 22:32:56.630430 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.641980 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:32:56.642064 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:32:56.697260 1152828 cri.go:89] found id: "72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:32:56.697295 1152828 cri.go:89] found id: ""
	I0108 22:32:56.697303 1152828 logs.go:284] 1 containers: [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0]
	I0108 22:32:56.697375 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:32:56.702123 1152828 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:32:56.702162 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:32:56.757961 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:56.799233 1152828 logs.go:123] Gathering logs for container status ...
	I0108 22:32:56.799270 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:32:56.868923 1152828 logs.go:123] Gathering logs for kubelet ...
	I0108 22:32:56.868953 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 22:32:56.916194 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.414213    1355 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.916410 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.414288    1355 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.919162 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.477202    1355 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.919357 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.923411 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.923622 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.923802 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:32:56.924003 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:32:56.969934 1152828 logs.go:123] Gathering logs for kube-apiserver [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460] ...
	I0108 22:32:56.969975 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:32:56.987104 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:57.054972 1152828 logs.go:123] Gathering logs for etcd [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939] ...
	I0108 22:32:57.055010 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:32:57.090164 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:57.102422 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:57.126662 1152828 logs.go:123] Gathering logs for coredns [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8] ...
	I0108 22:32:57.126696 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:32:57.177466 1152828 logs.go:123] Gathering logs for kube-proxy [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89] ...
	I0108 22:32:57.177497 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:32:57.224105 1152828 logs.go:123] Gathering logs for kube-controller-manager [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa] ...
	I0108 22:32:57.224137 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:32:57.258923 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:57.298130 1152828 logs.go:123] Gathering logs for dmesg ...
	I0108 22:32:57.298166 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:32:57.319454 1152828 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:32:57.319484 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:32:57.482114 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:57.501189 1152828 logs.go:123] Gathering logs for kube-scheduler [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e] ...
	I0108 22:32:57.501220 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:32:57.550838 1152828 logs.go:123] Gathering logs for kindnet [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0] ...
	I0108 22:32:57.550867 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:32:57.588813 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:57.599381 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:32:57.599443 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:32:57.599511 1152828 out.go:239] X Problems detected in kubelet:
	W0108 22:32:57.599554 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:32:57.599588 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:32:57.599636 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:32:57.599674 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:32:57.599722 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:32:57.599762 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:32:57.599794 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:32:57.601539 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:57.757366 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:57.983601 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:58.090754 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:58.115970 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:58.258937 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:58.483083 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:58.591112 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:58.600518 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:58.757754 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:58.984014 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:59.088747 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:59.102304 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:59.257879 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:59.482523 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:32:59.588382 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:32:59.598845 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:32:59.757505 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:32:59.982193 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:00.094721 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:00.123178 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:00.275488 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:00.483329 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:00.589518 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:00.602851 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:00.759027 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:00.982135 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:01.099449 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:01.129534 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:01.258225 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:01.482090 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:01.588677 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:01.599752 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:01.760099 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:02.014789 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:02.096915 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:02.104268 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:02.258842 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:02.482137 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:02.592014 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:02.601465 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:02.758207 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:02.984387 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:03.091692 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:03.106777 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:03.261969 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:03.484005 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:03.593642 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:03.608948 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:03.757564 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:03.982477 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:04.088988 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:04.101298 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:04.258011 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:04.481656 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:04.588347 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:04.600633 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:04.758216 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:04.982742 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:05.097964 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:05.115186 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:05.259118 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:05.482180 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:05.590667 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:05.601048 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:05.760832 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:05.983521 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:06.089992 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:06.099956 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:06.260502 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:06.483126 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:06.589949 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:06.599912 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:06.758025 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:06.983095 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:07.094142 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:07.104191 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:07.259634 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:07.482692 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:07.590128 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:07.599949 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:07.601167 1152828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:33:07.623278 1152828 api_server.go:72] duration metric: took 1m11.908284398s to wait for apiserver process to appear ...
	I0108 22:33:07.623304 1152828 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:33:07.623356 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:33:07.623433 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:33:07.682534 1152828 cri.go:89] found id: "e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:33:07.682560 1152828 cri.go:89] found id: ""
	I0108 22:33:07.682568 1152828 logs.go:284] 1 containers: [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460]
	I0108 22:33:07.682648 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:07.687323 1152828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:33:07.687419 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:33:07.735826 1152828 cri.go:89] found id: "d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:33:07.735851 1152828 cri.go:89] found id: ""
	I0108 22:33:07.735860 1152828 logs.go:284] 1 containers: [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939]
	I0108 22:33:07.735919 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:07.740875 1152828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:33:07.740946 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:33:07.757494 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:07.806681 1152828 cri.go:89] found id: "6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:33:07.806752 1152828 cri.go:89] found id: ""
	I0108 22:33:07.806773 1152828 logs.go:284] 1 containers: [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8]
	I0108 22:33:07.806853 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:07.811824 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:33:07.811943 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:33:07.869815 1152828 cri.go:89] found id: "ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:33:07.869884 1152828 cri.go:89] found id: ""
	I0108 22:33:07.869910 1152828 logs.go:284] 1 containers: [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e]
	I0108 22:33:07.869992 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:07.874767 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:33:07.874901 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:33:07.934028 1152828 cri.go:89] found id: "324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:33:07.934087 1152828 cri.go:89] found id: ""
	I0108 22:33:07.934116 1152828 logs.go:284] 1 containers: [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89]
	I0108 22:33:07.934197 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:07.938657 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:33:07.938772 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:33:07.982683 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:07.997709 1152828 cri.go:89] found id: "f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:33:07.997780 1152828 cri.go:89] found id: ""
	I0108 22:33:07.997802 1152828 logs.go:284] 1 containers: [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa]
	I0108 22:33:07.997884 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:08.002945 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:33:08.003109 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:33:08.066652 1152828 cri.go:89] found id: "72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:33:08.066711 1152828 cri.go:89] found id: ""
	I0108 22:33:08.066740 1152828 logs.go:284] 1 containers: [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0]
	I0108 22:33:08.066821 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:08.073883 1152828 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:33:08.073956 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:33:08.094530 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:08.102245 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:08.261289 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:08.302850 1152828 logs.go:123] Gathering logs for coredns [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8] ...
	I0108 22:33:08.302887 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:33:08.356482 1152828 logs.go:123] Gathering logs for kindnet [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0] ...
	I0108 22:33:08.356513 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:33:08.424870 1152828 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:33:08.424899 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:33:08.486913 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:08.545631 1152828 logs.go:123] Gathering logs for kubelet ...
	I0108 22:33:08.545703 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 22:33:08.589345 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.414213    1355 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.589594 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.414288    1355 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.592451 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.477202    1355 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.592674 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:33:08.595157 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:33:08.600411 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0108 22:33:08.600799 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.601048 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.601249 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:08.601469 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:33:08.646875 1152828 logs.go:123] Gathering logs for dmesg ...
	I0108 22:33:08.646945 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:33:08.671640 1152828 logs.go:123] Gathering logs for kube-apiserver [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460] ...
	I0108 22:33:08.671708 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:33:08.742726 1152828 logs.go:123] Gathering logs for etcd [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939] ...
	I0108 22:33:08.742757 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:33:08.757859 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:08.797897 1152828 logs.go:123] Gathering logs for kube-scheduler [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e] ...
	I0108 22:33:08.800762 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:33:08.889239 1152828 logs.go:123] Gathering logs for kube-proxy [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89] ...
	I0108 22:33:08.889272 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:33:08.971454 1152828 logs.go:123] Gathering logs for kube-controller-manager [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa] ...
	I0108 22:33:08.971481 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:33:08.987724 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:09.068062 1152828 logs.go:123] Gathering logs for container status ...
	I0108 22:33:09.068101 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:33:09.091807 1152828 kapi.go:107] duration metric: took 1m8.00826851s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 22:33:09.101403 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:09.144846 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:33:09.144878 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:33:09.145063 1152828 out.go:239] X Problems detected in kubelet:
	W0108 22:33:09.145082 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:09.145265 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:09.145277 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:09.145292 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:09.145302 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:33:09.145328 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:33:09.145343 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:33:09.257862 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:09.484675 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:09.599563 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:09.758055 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:09.996237 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:10.102150 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:10.258315 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:10.481947 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:10.604130 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:10.758205 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:10.989197 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:11.108712 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:11.258233 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:11.481868 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:11.599248 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:11.759059 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:11.995500 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:12.101169 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:12.257794 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:12.483118 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:12.600169 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:12.758573 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:12.999803 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:13.120900 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:13.258877 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:13.483027 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:13.599347 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:13.759557 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:13.982873 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:14.104063 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:14.257450 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:14.482982 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:14.600944 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:14.759521 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:14.982298 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:15.102673 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:15.257659 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:15.482321 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:15.600685 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:15.759860 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:15.982651 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:16.107269 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:16.258081 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:16.486747 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:16.600505 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:16.757782 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:16.982728 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:17.103639 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:17.257368 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:17.482265 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:17.600811 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:17.757797 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:17.983376 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:18.100896 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:18.257800 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:18.483494 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:18.610505 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:18.757102 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:18.982388 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:19.109673 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:19.146693 1152828 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 22:33:19.155809 1152828 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 22:33:19.157207 1152828 api_server.go:141] control plane version: v1.28.4
	I0108 22:33:19.157235 1152828 api_server.go:131] duration metric: took 11.533923987s to wait for apiserver health ...
	I0108 22:33:19.157245 1152828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:33:19.157280 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 22:33:19.157351 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 22:33:19.225025 1152828 cri.go:89] found id: "e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:33:19.225046 1152828 cri.go:89] found id: ""
	I0108 22:33:19.225053 1152828 logs.go:284] 1 containers: [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460]
	I0108 22:33:19.225107 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.232259 1152828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 22:33:19.232336 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 22:33:19.265770 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:19.300185 1152828 cri.go:89] found id: "d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:33:19.300209 1152828 cri.go:89] found id: ""
	I0108 22:33:19.300216 1152828 logs.go:284] 1 containers: [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939]
	I0108 22:33:19.300270 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.315413 1152828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 22:33:19.315483 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 22:33:19.384605 1152828 cri.go:89] found id: "6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:33:19.384631 1152828 cri.go:89] found id: ""
	I0108 22:33:19.384639 1152828 logs.go:284] 1 containers: [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8]
	I0108 22:33:19.384694 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.389353 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 22:33:19.389427 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 22:33:19.437107 1152828 cri.go:89] found id: "ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:33:19.437131 1152828 cri.go:89] found id: ""
	I0108 22:33:19.437140 1152828 logs.go:284] 1 containers: [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e]
	I0108 22:33:19.437209 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.441924 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 22:33:19.442014 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 22:33:19.484143 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:19.487632 1152828 cri.go:89] found id: "324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:33:19.487655 1152828 cri.go:89] found id: ""
	I0108 22:33:19.487664 1152828 logs.go:284] 1 containers: [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89]
	I0108 22:33:19.487719 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.492223 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 22:33:19.492293 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 22:33:19.538897 1152828 cri.go:89] found id: "f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:33:19.538920 1152828 cri.go:89] found id: ""
	I0108 22:33:19.538928 1152828 logs.go:284] 1 containers: [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa]
	I0108 22:33:19.539016 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.543706 1152828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 22:33:19.543779 1152828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 22:33:19.591475 1152828 cri.go:89] found id: "72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:33:19.591497 1152828 cri.go:89] found id: ""
	I0108 22:33:19.591506 1152828 logs.go:284] 1 containers: [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0]
	I0108 22:33:19.591579 1152828 ssh_runner.go:195] Run: which crictl
	I0108 22:33:19.596363 1152828 logs.go:123] Gathering logs for describe nodes ...
	I0108 22:33:19.596386 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 22:33:19.600629 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:33:19.741785 1152828 logs.go:123] Gathering logs for etcd [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939] ...
	I0108 22:33:19.741817 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939"
	I0108 22:33:19.757561 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:19.792880 1152828 logs.go:123] Gathering logs for kindnet [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0] ...
	I0108 22:33:19.792911 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0"
	I0108 22:33:19.836909 1152828 logs.go:123] Gathering logs for container status ...
	I0108 22:33:19.836937 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 22:33:19.908893 1152828 logs.go:123] Gathering logs for dmesg ...
	I0108 22:33:19.908924 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 22:33:19.931503 1152828 logs.go:123] Gathering logs for kube-apiserver [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460] ...
	I0108 22:33:19.931531 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460"
	I0108 22:33:19.984310 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:20.014952 1152828 logs.go:123] Gathering logs for coredns [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8] ...
	I0108 22:33:20.015001 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8"
	I0108 22:33:20.076428 1152828 logs.go:123] Gathering logs for kube-scheduler [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e] ...
	I0108 22:33:20.076459 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e"
	I0108 22:33:20.102896 1152828 kapi.go:107] duration metric: took 1m18.009659225s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 22:33:20.142471 1152828 logs.go:123] Gathering logs for kube-proxy [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89] ...
	I0108 22:33:20.142502 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89"
	I0108 22:33:20.191113 1152828 logs.go:123] Gathering logs for kube-controller-manager [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa] ...
	I0108 22:33:20.191148 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa"
	I0108 22:33:20.258953 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:20.270088 1152828 logs.go:123] Gathering logs for CRI-O ...
	I0108 22:33:20.270122 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 22:33:20.364090 1152828 logs.go:123] Gathering logs for kubelet ...
	I0108 22:33:20.364128 1152828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 22:33:20.395886 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.414213    1355 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.396100 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.414288    1355 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.398960 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.477202    1355 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.399165 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.403399 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.403625 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.403812 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.404016 1152828 logs.go:138] Found kubelet problem: Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:33:20.455573 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:33:20.455610 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 22:33:20.455666 1152828 out.go:239] X Problems detected in kubelet:
	W0108 22:33:20.455681 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.477233    1355 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.455688 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.455700 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484343    1355 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-260832" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.455708 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: W0108 22:32:25.484295    1355 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	W0108 22:33:20.455719 1152828 out.go:239]   Jan 08 22:32:25 addons-260832 kubelet[1355]: E0108 22:32:25.484364    1355 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-260832" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-260832' and this object
	I0108 22:33:20.455726 1152828 out.go:309] Setting ErrFile to fd 2...
	I0108 22:33:20.455739 1152828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:33:20.482557 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:20.757915 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:20.982826 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:21.258137 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:21.481769 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:21.758087 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:21.981593 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:22.257218 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:22.481960 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:22.757371 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:22.981812 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:23.257917 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:23.481851 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:23.757270 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:23.982558 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:24.257132 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:24.482600 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:24.757143 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:24.982756 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:25.257923 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:25.482700 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:25.759211 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:25.982555 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:26.257572 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:26.481552 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:26.757152 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:26.981762 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:27.257428 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:27.481888 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:27.757119 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:27.982302 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:28.257976 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:28.482055 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:28.757806 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:28.981766 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:29.257604 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:29.483967 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:29.757313 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:29.982132 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:30.257845 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:30.467755 1152828 system_pods.go:59] 18 kube-system pods found
	I0108 22:33:30.467793 1152828 system_pods.go:61] "coredns-5dd5756b68-nqqkm" [0413e8da-1c0b-4c72-970d-a6b6fcabf1ba] Running
	I0108 22:33:30.467800 1152828 system_pods.go:61] "csi-hostpath-attacher-0" [3993d058-47f7-46fc-82b4-9621b83f8067] Running
	I0108 22:33:30.467805 1152828 system_pods.go:61] "csi-hostpath-resizer-0" [b05632b8-37d9-4090-a3f6-a745d32f7c1d] Running
	I0108 22:33:30.467811 1152828 system_pods.go:61] "csi-hostpathplugin-fpn25" [7421d9c3-8bfe-4f72-a9c9-4cc01e362481] Running
	I0108 22:33:30.467818 1152828 system_pods.go:61] "etcd-addons-260832" [d8a2cb31-51f8-45d7-881a-0183950ca22b] Running
	I0108 22:33:30.467823 1152828 system_pods.go:61] "kindnet-5qlfv" [d4c68383-29ec-4f46-ac3a-621ddd3081a8] Running
	I0108 22:33:30.467829 1152828 system_pods.go:61] "kube-apiserver-addons-260832" [7713da22-1619-4025-a991-8cb0c53e754d] Running
	I0108 22:33:30.467834 1152828 system_pods.go:61] "kube-controller-manager-addons-260832" [fb76684a-2503-48a5-b37b-120648eb080a] Running
	I0108 22:33:30.467843 1152828 system_pods.go:61] "kube-ingress-dns-minikube" [8b61e18d-b393-4b2a-af40-3eec3a5596c9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:33:30.467855 1152828 system_pods.go:61] "kube-proxy-ts9nw" [7e33a586-2d14-45c3-9831-787086645646] Running
	I0108 22:33:30.467862 1152828 system_pods.go:61] "kube-scheduler-addons-260832" [3c9ebf7d-3844-4c3f-9234-c1d8dca4f67f] Running
	I0108 22:33:30.467868 1152828 system_pods.go:61] "metrics-server-7c66d45ddc-42csc" [03fc0110-e5c5-478c-b243-ee4881fde5ae] Running
	I0108 22:33:30.467877 1152828 system_pods.go:61] "nvidia-device-plugin-daemonset-ljg8m" [926d3b0d-c236-40b7-990f-4df2a22987bc] Running
	I0108 22:33:30.467881 1152828 system_pods.go:61] "registry-9b76r" [77da5921-b1b4-4033-a80b-87f834f8b970] Running
	I0108 22:33:30.467886 1152828 system_pods.go:61] "registry-proxy-fr2zl" [d9d359d0-f7ec-4cda-9d51-479545d9a406] Running
	I0108 22:33:30.467894 1152828 system_pods.go:61] "snapshot-controller-58dbcc7b99-5g855" [a38e40f1-08d3-4b22-bca0-71cc845b0721] Running
	I0108 22:33:30.467899 1152828 system_pods.go:61] "snapshot-controller-58dbcc7b99-hsmmj" [4eb413ea-8b59-4bc4-aec1-2a89dc56a7b3] Running
	I0108 22:33:30.467903 1152828 system_pods.go:61] "storage-provisioner" [87041c66-c853-48ee-8c89-991817b1d171] Running
	I0108 22:33:30.467909 1152828 system_pods.go:74] duration metric: took 11.31065874s to wait for pod list to return data ...
	I0108 22:33:30.467920 1152828 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:33:30.470339 1152828 default_sa.go:45] found service account: "default"
	I0108 22:33:30.470362 1152828 default_sa.go:55] duration metric: took 2.43648ms for default service account to be created ...
	I0108 22:33:30.470375 1152828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:33:30.483068 1152828 system_pods.go:86] 18 kube-system pods found
	I0108 22:33:30.483103 1152828 system_pods.go:89] "coredns-5dd5756b68-nqqkm" [0413e8da-1c0b-4c72-970d-a6b6fcabf1ba] Running
	I0108 22:33:30.483110 1152828 system_pods.go:89] "csi-hostpath-attacher-0" [3993d058-47f7-46fc-82b4-9621b83f8067] Running
	I0108 22:33:30.483116 1152828 system_pods.go:89] "csi-hostpath-resizer-0" [b05632b8-37d9-4090-a3f6-a745d32f7c1d] Running
	I0108 22:33:30.483121 1152828 system_pods.go:89] "csi-hostpathplugin-fpn25" [7421d9c3-8bfe-4f72-a9c9-4cc01e362481] Running
	I0108 22:33:30.483126 1152828 system_pods.go:89] "etcd-addons-260832" [d8a2cb31-51f8-45d7-881a-0183950ca22b] Running
	I0108 22:33:30.483131 1152828 system_pods.go:89] "kindnet-5qlfv" [d4c68383-29ec-4f46-ac3a-621ddd3081a8] Running
	I0108 22:33:30.483136 1152828 system_pods.go:89] "kube-apiserver-addons-260832" [7713da22-1619-4025-a991-8cb0c53e754d] Running
	I0108 22:33:30.483142 1152828 system_pods.go:89] "kube-controller-manager-addons-260832" [fb76684a-2503-48a5-b37b-120648eb080a] Running
	I0108 22:33:30.483151 1152828 system_pods.go:89] "kube-ingress-dns-minikube" [8b61e18d-b393-4b2a-af40-3eec3a5596c9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:33:30.483158 1152828 system_pods.go:89] "kube-proxy-ts9nw" [7e33a586-2d14-45c3-9831-787086645646] Running
	I0108 22:33:30.483164 1152828 system_pods.go:89] "kube-scheduler-addons-260832" [3c9ebf7d-3844-4c3f-9234-c1d8dca4f67f] Running
	I0108 22:33:30.483169 1152828 system_pods.go:89] "metrics-server-7c66d45ddc-42csc" [03fc0110-e5c5-478c-b243-ee4881fde5ae] Running
	I0108 22:33:30.483175 1152828 system_pods.go:89] "nvidia-device-plugin-daemonset-ljg8m" [926d3b0d-c236-40b7-990f-4df2a22987bc] Running
	I0108 22:33:30.483180 1152828 system_pods.go:89] "registry-9b76r" [77da5921-b1b4-4033-a80b-87f834f8b970] Running
	I0108 22:33:30.483184 1152828 system_pods.go:89] "registry-proxy-fr2zl" [d9d359d0-f7ec-4cda-9d51-479545d9a406] Running
	I0108 22:33:30.483189 1152828 system_pods.go:89] "snapshot-controller-58dbcc7b99-5g855" [a38e40f1-08d3-4b22-bca0-71cc845b0721] Running
	I0108 22:33:30.483193 1152828 system_pods.go:89] "snapshot-controller-58dbcc7b99-hsmmj" [4eb413ea-8b59-4bc4-aec1-2a89dc56a7b3] Running
	I0108 22:33:30.483198 1152828 system_pods.go:89] "storage-provisioner" [87041c66-c853-48ee-8c89-991817b1d171] Running
	I0108 22:33:30.483204 1152828 system_pods.go:126] duration metric: took 12.824643ms to wait for k8s-apps to be running ...
	I0108 22:33:30.483211 1152828 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:33:30.483270 1152828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:33:30.489708 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:30.497888 1152828 system_svc.go:56] duration metric: took 14.664689ms WaitForService to wait for kubelet.
	I0108 22:33:30.497983 1152828 kubeadm.go:581] duration metric: took 1m34.782993487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:33:30.498026 1152828 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:33:30.502381 1152828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 22:33:30.502427 1152828 node_conditions.go:123] node cpu capacity is 2
	I0108 22:33:30.502465 1152828 node_conditions.go:105] duration metric: took 4.408619ms to run NodePressure ...
	I0108 22:33:30.502490 1152828 start.go:228] waiting for startup goroutines ...
	I0108 22:33:30.759845 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:30.982599 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:31.257696 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:31.482252 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:31.757466 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:31.982250 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:32.258020 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:32.482125 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:32.758026 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:32.982622 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:33.258802 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:33.485131 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:33.756847 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:33.982573 1152828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:33:34.258697 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:34.483356 1152828 kapi.go:107] duration metric: took 1m29.505168348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 22:33:34.499447 1152828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-260832 cluster.
	I0108 22:33:34.513680 1152828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 22:33:34.526699 1152828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
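	For illustration only, a pod manifest that opts out of the credential mount might carry the label named in the message above. This is a hypothetical sketch: only the gcp-auth-skip-secret label key comes from the log output; the pod name, label value, and container name are placeholders, and the image is the hello-app image used elsewhere in this report.
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth          # placeholder name
	      labels:
	        gcp-auth-skip-secret: "true"     # key from the notice above; value is illustrative
	    spec:
	      containers:
	      - name: app                        # placeholder container name
	        image: gcr.io/google-samples/hello-app:1.0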
	I0108 22:33:34.759732 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:35.268507 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:35.761940 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:36.259780 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:36.757601 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:37.258537 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:37.758705 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:38.257758 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:38.758189 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:39.258507 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:39.758036 1152828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:33:40.258174 1152828 kapi.go:107] duration metric: took 1m38.505454676s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 22:33:40.261033 1152828 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0108 22:33:40.263385 1152828 addons.go:508] enable addons completed in 1m45.121139808s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0108 22:33:40.263453 1152828 start.go:233] waiting for cluster config update ...
	I0108 22:33:40.263479 1152828 start.go:242] writing updated cluster config ...
	I0108 22:33:40.263821 1152828 ssh_runner.go:195] Run: rm -f paused
	I0108 22:33:40.746050 1152828 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:33:40.748596 1152828 out.go:177] * Done! kubectl is now configured to use "addons-260832" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.775615890Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8958de23-6b88-4dab-93c6-84aff0e89036 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.776595584Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e6df14be-6426-4d20-94ef-8813133d7c77 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.776856029Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e6df14be-6426-4d20-94ef-8813133d7c77 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.777744794Z" level=info msg="Creating container: default/hello-world-app-5d77478584-5ctxt/hello-world-app" id=3282c5b5-79dc-475e-923d-581cfaa451d4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.777828781Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.845954655Z" level=info msg="Created container fc78438f0892127983979dd00d90513b6b5112bcde6d9863bb247798960450ce: default/hello-world-app-5d77478584-5ctxt/hello-world-app" id=3282c5b5-79dc-475e-923d-581cfaa451d4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.848201352Z" level=info msg="Starting container: fc78438f0892127983979dd00d90513b6b5112bcde6d9863bb247798960450ce" id=0a5ed13b-8a7a-49ee-b2c5-fde5fa645b36 name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 22:37:46 addons-260832 conmon[8401]: conmon fc78438f089212798397 <ninfo>: container 8412 exited with status 1
	Jan 08 22:37:46 addons-260832 crio[891]: time="2024-01-08 22:37:46.859302257Z" level=info msg="Started container" PID=8412 containerID=fc78438f0892127983979dd00d90513b6b5112bcde6d9863bb247798960450ce description=default/hello-world-app-5d77478584-5ctxt/hello-world-app id=0a5ed13b-8a7a-49ee-b2c5-fde5fa645b36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b58f59f80b38121eac6665ffc63b86264db949c4a04c153be9732c6406a1276b
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.118144183Z" level=info msg="Removing container: 3593a61c6100d5df2e2d357aa6bf14d1e83e0cdac48c00589889dea4cb262be1" id=e4c726dc-2916-4531-b573-5e9b18e00bb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.142312783Z" level=info msg="Removed container 3593a61c6100d5df2e2d357aa6bf14d1e83e0cdac48c00589889dea4cb262be1: default/hello-world-app-5d77478584-5ctxt/hello-world-app" id=e4c726dc-2916-4531-b573-5e9b18e00bb5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.146789462Z" level=warning msg="Stopping container 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=29b531fc-dcf4-4ff1-8347-3da4025cc55b name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 22:37:47 addons-260832 conmon[5568]: conmon 03e2a02917e0d0f64af4 <ninfo>: container 5580 exited with status 137
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.290078783Z" level=info msg="Stopped container 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318: ingress-nginx/ingress-nginx-controller-69cff4fd79-zkgxf/controller" id=29b531fc-dcf4-4ff1-8347-3da4025cc55b name=/runtime.v1.RuntimeService/StopContainer
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.290695698Z" level=info msg="Stopping pod sandbox: 722a2e1b074968e6cefe44da45d582b5b60c422ec0774f2d3617e53d5fa38c91" id=7e4c1631-2248-47ed-9853-f98b5db1087c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.294761598Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-N7YB2WDFE3B6QGLU - [0:0]\n:KUBE-HP-OZRVNL4XB6DAN6OO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-N7YB2WDFE3B6QGLU\n-X KUBE-HP-OZRVNL4XB6DAN6OO\nCOMMIT\n"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.296346474Z" level=info msg="Closing host port tcp:80"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.296397345Z" level=info msg="Closing host port tcp:443"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.298212841Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.298245866Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.298416901Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-zkgxf Namespace:ingress-nginx ID:722a2e1b074968e6cefe44da45d582b5b60c422ec0774f2d3617e53d5fa38c91 UID:f402e168-9bd8-4283-8bf6-e28fdd39b2e5 NetNS:/var/run/netns/8956b3ac-c6c1-4a2d-83b1-654ca200d65a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.298554639Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-zkgxf from CNI network \"kindnet\" (type=ptp)"
	Jan 08 22:37:47 addons-260832 crio[891]: time="2024-01-08 22:37:47.326713267Z" level=info msg="Stopped pod sandbox: 722a2e1b074968e6cefe44da45d582b5b60c422ec0774f2d3617e53d5fa38c91" id=7e4c1631-2248-47ed-9853-f98b5db1087c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 22:37:48 addons-260832 crio[891]: time="2024-01-08 22:37:48.123585829Z" level=info msg="Removing container: 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318" id=f18f7537-50b2-4505-b723-d63b258d7d42 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:37:48 addons-260832 crio[891]: time="2024-01-08 22:37:48.139347248Z" level=info msg="Removed container 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318: ingress-nginx/ingress-nginx-controller-69cff4fd79-zkgxf/controller" id=f18f7537-50b2-4505-b723-d63b258d7d42 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc78438f08921       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             5 seconds ago       Exited              hello-world-app           2                   b58f59f80b381       hello-world-app-5d77478584-5ctxt
	3f6196b477c0f       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago       Running             nginx                     0                   806086b661eee       nginx
	7ce9d466902e3       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        4 minutes ago       Running             headlamp                  0                   7cabc544eae09       headlamp-7ddfbb94ff-sbxl9
	2eab78800901d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   0d34003e586c9       gcp-auth-d4c87556c-sptpx
	0afcde3ff8ff5       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             4 minutes ago       Exited              patch                     3                   7d169c2a184d6       ingress-nginx-admission-patch-k7d6c
	e24db26fefda0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   dccc77a364a97       ingress-nginx-admission-create-g4t6s
	44b383c549112       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   4d8d0b963a31d       yakd-dashboard-9947fc6bf-s447v
	6e212c5bf1454       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   95e153d79d67e       coredns-5dd5756b68-nqqkm
	205edd15a6a72       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   03f090cb63d5f       storage-provisioner
	324d199235d33       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   15f9bdf8f9ab1       kube-proxy-ts9nw
	72bfaaf0c7efe       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   68e323721db7b       kindnet-5qlfv
	ffb88ee9137cd       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             6 minutes ago       Running             kube-scheduler            0                   2708c1037dc6e       kube-scheduler-addons-260832
	f2989689755fa       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             6 minutes ago       Running             kube-controller-manager   0                   9050ceb18ffc8       kube-controller-manager-addons-260832
	e802a7408651b       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             6 minutes ago       Running             kube-apiserver            0                   a0bc9ae3f4cfe       kube-apiserver-addons-260832
	d47d5ae36a307       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             6 minutes ago       Running             etcd                      0                   40e6c35e81ea2       etcd-addons-260832
	
	
	==> coredns [6e212c5bf1454f8e37f1f9a4399c741d9c596f5641c32e21d5ea41f58e786ff8] <==
	[INFO] 10.244.0.19:56689 - 14609 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057427s
	[INFO] 10.244.0.19:56689 - 29533 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000234107s
	[INFO] 10.244.0.19:41699 - 35593 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001824891s
	[INFO] 10.244.0.19:56689 - 11824 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001328713s
	[INFO] 10.244.0.19:41699 - 61613 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115979s
	[INFO] 10.244.0.19:56689 - 1018 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002047018s
	[INFO] 10.244.0.19:56689 - 62189 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067766s
	[INFO] 10.244.0.19:51967 - 4512 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000124823s
	[INFO] 10.244.0.19:51967 - 31463 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083651s
	[INFO] 10.244.0.19:51967 - 41492 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057468s
	[INFO] 10.244.0.19:51967 - 65023 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059824s
	[INFO] 10.244.0.19:51967 - 56408 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056262s
	[INFO] 10.244.0.19:51967 - 49866 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056894s
	[INFO] 10.244.0.19:51967 - 13476 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0021681s
	[INFO] 10.244.0.19:39868 - 25260 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061808s
	[INFO] 10.244.0.19:39868 - 46677 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000154928s
	[INFO] 10.244.0.19:51967 - 14655 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001101711s
	[INFO] 10.244.0.19:39868 - 39014 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000188774s
	[INFO] 10.244.0.19:51967 - 63840 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090018s
	[INFO] 10.244.0.19:39868 - 34516 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048639s
	[INFO] 10.244.0.19:39868 - 62840 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070637s
	[INFO] 10.244.0.19:39868 - 13483 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054211s
	[INFO] 10.244.0.19:39868 - 19899 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001248016s
	[INFO] 10.244.0.19:39868 - 9359 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000989195s
	[INFO] 10.244.0.19:39868 - 64590 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056902s
	
	
	==> describe nodes <==
	Name:               addons-260832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-260832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=addons-260832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_31_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-260832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:31:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-260832
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:37:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:37:49 +0000   Mon, 08 Jan 2024 22:31:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:37:49 +0000   Mon, 08 Jan 2024 22:31:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:37:49 +0000   Mon, 08 Jan 2024 22:31:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:37:49 +0000   Mon, 08 Jan 2024 22:32:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-260832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 cac3e87dc1484c2490368a2d19b9c03b
	  System UUID:                6566b15d-9802-49ef-9c75-c11c050fab60
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-5ctxt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gcp-auth                    gcp-auth-d4c87556c-sptpx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  headlamp                    headlamp-7ddfbb94ff-sbxl9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 coredns-5dd5756b68-nqqkm                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m58s
	  kube-system                 etcd-addons-260832                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m10s
	  kube-system                 kindnet-5qlfv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m59s
	  kube-system                 kube-apiserver-addons-260832             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-addons-260832    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-ts9nw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-scheduler-addons-260832             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-s447v           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node addons-260832 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node addons-260832 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x8 over 6m18s)  kubelet          Node addons-260832 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m11s                  kubelet          Node addons-260832 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s                  kubelet          Node addons-260832 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s                  kubelet          Node addons-260832 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m59s                  node-controller  Node addons-260832 event: Registered Node addons-260832 in Controller
	  Normal  NodeReady                5m27s                  kubelet          Node addons-260832 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001204] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000777] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000c1ba5b74
	[  +0.001149] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +0.002446] FS-Cache: Duplicate cookie detected
	[  +0.000760] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001121] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000006a7d9f12
	[  +0.001191] FS-Cache: O-key=[8] '503e5c0100000000'
	[  +0.000772] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001063] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000ee2dae65
	[  +0.001177] FS-Cache: N-key=[8] '503e5c0100000000'
	[  +2.707981] FS-Cache: Duplicate cookie detected
	[  +0.000811] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001060] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=000000009c53aac4
	[  +0.001118] FS-Cache: O-key=[8] '4f3e5c0100000000'
	[  +0.000792] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=000000006704ace3
	[  +0.001159] FS-Cache: N-key=[8] '4f3e5c0100000000'
	[  +0.447043] FS-Cache: Duplicate cookie detected
	[  +0.000791] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001090] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001210] FS-Cache: O-key=[8] '553e5c0100000000'
	[  +0.000828] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001058] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000c1ba5b74
	[  +0.001155] FS-Cache: N-key=[8] '553e5c0100000000'
	
	
	==> etcd [d47d5ae36a307154e187f826b4478e71457058a1b6644b112815a2d4b6e52939] <==
	{"level":"info","ts":"2024-01-08T22:31:35.799063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:31:35.799128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-08T22:31:35.799171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:31:35.799207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T22:31:35.799247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:31:35.799282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-08T22:31:35.801115Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:35.805219Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-260832 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:31:35.805387Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:31:35.806385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-08T22:31:35.813066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:35.813543Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:35.81361Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:31:35.813403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:31:35.814581Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:31:35.82091Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:31:35.821071Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:31:56.392629Z","caller":"traceutil/trace.go:171","msg":"trace[1909141663] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"105.674454ms","start":"2024-01-08T22:31:56.286936Z","end":"2024-01-08T22:31:56.392611Z","steps":["trace[1909141663] 'process raft request'  (duration: 68.528572ms)","trace[1909141663] 'compare'  (duration: 37.061756ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:31:56.833039Z","caller":"traceutil/trace.go:171","msg":"trace[233756875] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"120.303714ms","start":"2024-01-08T22:31:56.712714Z","end":"2024-01-08T22:31:56.833018Z","steps":["trace[233756875] 'process raft request'  (duration: 92.600639ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:31:58.82941Z","caller":"traceutil/trace.go:171","msg":"trace[425607975] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"107.873832ms","start":"2024-01-08T22:31:58.721518Z","end":"2024-01-08T22:31:58.829392Z","steps":["trace[425607975] 'process raft request'  (duration: 49.549458ms)","trace[425607975] 'compare'  (duration: 50.7207ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:31:58.837624Z","caller":"traceutil/trace.go:171","msg":"trace[1221425976] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:432; }","duration":"115.777615ms","start":"2024-01-08T22:31:58.721832Z","end":"2024-01-08T22:31:58.83761Z","steps":["trace[1221425976] 'read index received'  (duration: 49.189559ms)","trace[1221425976] 'applied index is now lower than readState.Index'  (duration: 66.585619ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:31:58.837691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.869971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:31:58.882977Z","caller":"traceutil/trace.go:171","msg":"trace[509592549] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:425; }","duration":"161.158469ms","start":"2024-01-08T22:31:58.7218Z","end":"2024-01-08T22:31:58.882959Z","steps":["trace[509592549] 'agreement among raft nodes before linearized reading'  (duration: 115.853158ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:32:26.389366Z","caller":"traceutil/trace.go:171","msg":"trace[915358124] transaction","detail":"{read_only:false; response_revision:900; number_of_response:1; }","duration":"114.830979ms","start":"2024-01-08T22:32:26.274492Z","end":"2024-01-08T22:32:26.389323Z","steps":["trace[915358124] 'process raft request'  (duration: 114.793113ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:32:26.392083Z","caller":"traceutil/trace.go:171","msg":"trace[2122070053] transaction","detail":"{read_only:false; response_revision:899; number_of_response:1; }","duration":"117.929895ms","start":"2024-01-08T22:32:26.274137Z","end":"2024-01-08T22:32:26.392066Z","steps":["trace[2122070053] 'process raft request'  (duration: 107.082387ms)"],"step_count":1}
	
	
	==> gcp-auth [2eab78800901d632ddd42ac8024c53d2ada9296c662c519aa0361fbbc9469b3c] <==
	2024/01/08 22:33:33 GCP Auth Webhook started!
	2024/01/08 22:33:42 Ready to marshal response ...
	2024/01/08 22:33:42 Ready to write response ...
	2024/01/08 22:33:42 Ready to marshal response ...
	2024/01/08 22:33:42 Ready to write response ...
	2024/01/08 22:33:42 Ready to marshal response ...
	2024/01/08 22:33:42 Ready to write response ...
	2024/01/08 22:33:52 Ready to marshal response ...
	2024/01/08 22:33:52 Ready to write response ...
	2024/01/08 22:34:01 Ready to marshal response ...
	2024/01/08 22:34:01 Ready to write response ...
	2024/01/08 22:34:01 Ready to marshal response ...
	2024/01/08 22:34:01 Ready to write response ...
	2024/01/08 22:34:11 Ready to marshal response ...
	2024/01/08 22:34:11 Ready to write response ...
	2024/01/08 22:34:15 Ready to marshal response ...
	2024/01/08 22:34:15 Ready to write response ...
	2024/01/08 22:34:48 Ready to marshal response ...
	2024/01/08 22:34:48 Ready to write response ...
	2024/01/08 22:35:07 Ready to marshal response ...
	2024/01/08 22:35:07 Ready to write response ...
	2024/01/08 22:37:26 Ready to marshal response ...
	2024/01/08 22:37:26 Ready to write response ...
	
	
	==> kernel <==
	 22:37:52 up  5:20,  0 users,  load average: 0.68, 1.57, 2.28
	Linux addons-260832 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [72bfaaf0c7efef966efe1752ba86ba3694d401db0852c327f12a3105f5829bb0] <==
	I0108 22:35:45.462209       1 main.go:227] handling current node
	I0108 22:35:55.466696       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:35:55.466727       1 main.go:227] handling current node
	I0108 22:36:05.478087       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:05.478114       1 main.go:227] handling current node
	I0108 22:36:15.490867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:15.490897       1 main.go:227] handling current node
	I0108 22:36:25.502149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:25.502179       1 main.go:227] handling current node
	I0108 22:36:35.507642       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:35.507675       1 main.go:227] handling current node
	I0108 22:36:45.517216       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:45.517249       1 main.go:227] handling current node
	I0108 22:36:55.521746       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:36:55.521772       1 main.go:227] handling current node
	I0108 22:37:05.525761       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:37:05.525790       1 main.go:227] handling current node
	I0108 22:37:15.529914       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:37:15.529949       1 main.go:227] handling current node
	I0108 22:37:25.540733       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:37:25.540762       1 main.go:227] handling current node
	I0108 22:37:35.545391       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:37:35.545418       1 main.go:227] handling current node
	I0108 22:37:45.556677       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:37:45.556706       1 main.go:227] handling current node
	
	
	==> kube-apiserver [e802a7408651b29256b4d5283835ee3be30b8e72e437c879b32c632640445460] <==
	I0108 22:35:01.337290       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 22:35:02.371962       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0108 22:35:03.531294       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.531455       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.544741       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.544911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.560502       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.560685       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.570614       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.570783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.577055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.577206       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.586094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.586148       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.605654       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.605749       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 22:35:03.613140       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 22:35:03.613184       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 22:35:04.571379       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 22:35:04.614243       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 22:35:04.624597       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 22:35:06.924904       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0108 22:35:07.260393       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.42.249"}
	I0108 22:35:56.953208       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 22:37:27.211281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.28.45"}
	
	
	==> kube-controller-manager [f2989689755fa0d7859cafe8b638e53b7be00478102e4202f8bc1571329771aa] <==
	E0108 22:36:50.878542       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:36:59.813322       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:36:59.813354       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:37:09.812738       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:37:09.812771       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:37:18.936207       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:37:18.936242       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:37:26.922920       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 22:37:26.964388       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-5ctxt"
	I0108 22:37:26.986225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.513187ms"
	I0108 22:37:27.028753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.414416ms"
	I0108 22:37:27.029042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.035µs"
	I0108 22:37:30.095458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.157µs"
	I0108 22:37:31.107158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.51µs"
	I0108 22:37:32.096638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.119µs"
	W0108 22:37:37.086779       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:37:37.086811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 22:37:38.659227       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:37:38.659261       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:37:44.119543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.818µs"
	I0108 22:37:44.125755       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 22:37:44.140343       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0108 22:37:45.547703       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 22:37:45.547734       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 22:37:47.136296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.413µs"
	
	
	==> kube-proxy [324d199235d3307af5eeeeb4550ad4d6c7982a7a4df1da18aea19205ca0d8b89] <==
	I0108 22:31:54.993450       1 server_others.go:69] "Using iptables proxy"
	I0108 22:31:55.021186       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0108 22:31:55.063950       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 22:31:55.066679       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:31:55.066791       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 22:31:55.066826       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 22:31:55.066912       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:31:55.067182       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:31:55.067378       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:31:55.068286       1 config.go:188] "Starting service config controller"
	I0108 22:31:55.068384       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:31:55.068430       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:31:55.068459       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:31:55.071425       1 config.go:315] "Starting node config controller"
	I0108 22:31:55.072511       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:31:55.168656       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:31:55.168733       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:31:55.175198       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ffb88ee9137cd9292e4c99bee8d2d051f13b3dddca5b991433c77c8ad648763e] <==
	W0108 22:31:39.336932       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:31:39.336972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:31:39.337115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:31:39.337158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:31:39.337269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:31:39.337308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:31:39.337430       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:31:39.337463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:31:39.337585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:31:39.337604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:31:39.337645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:31:39.337659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:31:39.337718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:31:39.337774       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:31:39.337798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:31:39.337853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:31:39.337894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:31:39.337927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:31:39.337937       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:31:39.337994       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 22:31:39.337723       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:31:39.338055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:31:39.337837       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:31:39.338123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0108 22:31:40.728259       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 22:37:41 addons-260832 kubelet[1355]: E0108 22:37:41.989191    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4eb4f69c4e3edf8a9db553d93cd878c11087fef1d78672e91a96d8be40c5daef/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4eb4f69c4e3edf8a9db553d93cd878c11087fef1d78672e91a96d8be40c5daef/diff: no such file or directory, extraDiskErr: <nil>
	Jan 08 22:37:41 addons-260832 kubelet[1355]: E0108 22:37:41.989307    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fca204b002649b3dcb8cfb9fe30a406de31899078144b43566b276db911e4314/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fca204b002649b3dcb8cfb9fe30a406de31899078144b43566b276db911e4314/diff: no such file or directory, extraDiskErr: <nil>
	Jan 08 22:37:41 addons-260832 kubelet[1355]: E0108 22:37:41.990467    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2d9e1e4df3c86d3de712ed867ed3584dc2c2105f413dcc80a2948e95657e03b4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2d9e1e4df3c86d3de712ed867ed3584dc2c2105f413dcc80a2948e95657e03b4/diff: no such file or directory, extraDiskErr: <nil>
	Jan 08 22:37:43 addons-260832 kubelet[1355]: I0108 22:37:43.104091    1355 scope.go:117] "RemoveContainer" containerID="c7bb260714024958d61af4bff1df9fe11ed3daf6a3a4953227f61db236045339"
	Jan 08 22:37:43 addons-260832 kubelet[1355]: I0108 22:37:43.174132    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p568s\" (UniqueName: \"kubernetes.io/projected/8b61e18d-b393-4b2a-af40-3eec3a5596c9-kube-api-access-p568s\") pod \"8b61e18d-b393-4b2a-af40-3eec3a5596c9\" (UID: \"8b61e18d-b393-4b2a-af40-3eec3a5596c9\") "
	Jan 08 22:37:43 addons-260832 kubelet[1355]: I0108 22:37:43.177185    1355 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b61e18d-b393-4b2a-af40-3eec3a5596c9-kube-api-access-p568s" (OuterVolumeSpecName: "kube-api-access-p568s") pod "8b61e18d-b393-4b2a-af40-3eec3a5596c9" (UID: "8b61e18d-b393-4b2a-af40-3eec3a5596c9"). InnerVolumeSpecName "kube-api-access-p568s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:37:43 addons-260832 kubelet[1355]: I0108 22:37:43.275409    1355 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p568s\" (UniqueName: \"kubernetes.io/projected/8b61e18d-b393-4b2a-af40-3eec3a5596c9-kube-api-access-p568s\") on node \"addons-260832\" DevicePath \"\""
	Jan 08 22:37:43 addons-260832 kubelet[1355]: I0108 22:37:43.775069    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8b61e18d-b393-4b2a-af40-3eec3a5596c9" path="/var/lib/kubelet/pods/8b61e18d-b393-4b2a-af40-3eec3a5596c9/volumes"
	Jan 08 22:37:45 addons-260832 kubelet[1355]: I0108 22:37:45.775938    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8844c220-15c0-41d0-8f12-03ca9cceb29e" path="/var/lib/kubelet/pods/8844c220-15c0-41d0-8f12-03ca9cceb29e/volumes"
	Jan 08 22:37:45 addons-260832 kubelet[1355]: I0108 22:37:45.776350    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a62340ff-8487-4355-84cd-eab37ed64616" path="/var/lib/kubelet/pods/a62340ff-8487-4355-84cd-eab37ed64616/volumes"
	Jan 08 22:37:46 addons-260832 kubelet[1355]: I0108 22:37:46.774750    1355 scope.go:117] "RemoveContainer" containerID="3593a61c6100d5df2e2d357aa6bf14d1e83e0cdac48c00589889dea4cb262be1"
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.116315    1355 scope.go:117] "RemoveContainer" containerID="3593a61c6100d5df2e2d357aa6bf14d1e83e0cdac48c00589889dea4cb262be1"
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.116588    1355 scope.go:117] "RemoveContainer" containerID="fc78438f0892127983979dd00d90513b6b5112bcde6d9863bb247798960450ce"
	Jan 08 22:37:47 addons-260832 kubelet[1355]: E0108 22:37:47.116838    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-5ctxt_default(08a96c77-c7e5-4689-bc02-eac668f04938)\"" pod="default/hello-world-app-5d77478584-5ctxt" podUID="08a96c77-c7e5-4689-bc02-eac668f04938"
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.407853    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-webhook-cert\") pod \"f402e168-9bd8-4283-8bf6-e28fdd39b2e5\" (UID: \"f402e168-9bd8-4283-8bf6-e28fdd39b2e5\") "
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.407923    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqlqz\" (UniqueName: \"kubernetes.io/projected/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-kube-api-access-jqlqz\") pod \"f402e168-9bd8-4283-8bf6-e28fdd39b2e5\" (UID: \"f402e168-9bd8-4283-8bf6-e28fdd39b2e5\") "
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.410298    1355 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-kube-api-access-jqlqz" (OuterVolumeSpecName: "kube-api-access-jqlqz") pod "f402e168-9bd8-4283-8bf6-e28fdd39b2e5" (UID: "f402e168-9bd8-4283-8bf6-e28fdd39b2e5"). InnerVolumeSpecName "kube-api-access-jqlqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.410869    1355 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f402e168-9bd8-4283-8bf6-e28fdd39b2e5" (UID: "f402e168-9bd8-4283-8bf6-e28fdd39b2e5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.508357    1355 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-webhook-cert\") on node \"addons-260832\" DevicePath \"\""
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.508396    1355 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jqlqz\" (UniqueName: \"kubernetes.io/projected/f402e168-9bd8-4283-8bf6-e28fdd39b2e5-kube-api-access-jqlqz\") on node \"addons-260832\" DevicePath \"\""
	Jan 08 22:37:47 addons-260832 kubelet[1355]: I0108 22:37:47.775532    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f402e168-9bd8-4283-8bf6-e28fdd39b2e5" path="/var/lib/kubelet/pods/f402e168-9bd8-4283-8bf6-e28fdd39b2e5/volumes"
	Jan 08 22:37:48 addons-260832 kubelet[1355]: I0108 22:37:48.121733    1355 scope.go:117] "RemoveContainer" containerID="03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318"
	Jan 08 22:37:48 addons-260832 kubelet[1355]: I0108 22:37:48.139626    1355 scope.go:117] "RemoveContainer" containerID="03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318"
	Jan 08 22:37:48 addons-260832 kubelet[1355]: E0108 22:37:48.140044    1355 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318\": container with ID starting with 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318 not found: ID does not exist" containerID="03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318"
	Jan 08 22:37:48 addons-260832 kubelet[1355]: I0108 22:37:48.140099    1355 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318"} err="failed to get container status \"03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318\": rpc error: code = NotFound desc = could not find container \"03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318\": container with ID starting with 03e2a02917e0d0f64af42922e98d5c95610f5857d8a82a7a3e7242e808ddd318 not found: ID does not exist"
	
	
	==> storage-provisioner [205edd15a6a7298e1267643d710433a3c59c8b68010989c0de151418d7b8c52b] <==
	I0108 22:32:26.545084       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:32:26.650005       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:32:26.650574       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:32:26.681821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:32:26.688223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-260832_3308519c-63b2-4559-9ab9-34ea226079e7!
	I0108 22:32:26.701777       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b80a407b-9e78-458f-bd67-bbb9a76e5e1a", APIVersion:"v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-260832_3308519c-63b2-4559-9ab9-34ea226079e7 became leader
	I0108 22:32:26.788568       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-260832_3308519c-63b2-4559-9ab9-34ea226079e7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-260832 -n addons-260832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-260832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.35s)
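The failure pattern above (curl run through "minikube ssh" exiting with status 28, i.e. a timeout, followed by nslookup against the node IP also timing out) can be re-checked by hand. The following is a minimal sketch, not part of the test suite, assuming the addons-260832 profile from this report is still running and the ingress addon is still enabled:

	# Inspect the ingress-nginx controller pod and service first
	kubectl --context addons-260832 -n ingress-nginx get pods,svc -o wide
	# Repeat the in-node request the test makes; curl exit code 28 means the request timed out
	out/minikube-linux-arm64 -p addons-260832 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Repeat the ingress-dns lookup against the node IP reported by minikube
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-260832 ip)"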

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (179.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-332576 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-332576 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.264169151s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-332576 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-332576 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c2dd59b5-1bd9-4d7a-84a4-a6b34cae5d7a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c2dd59b5-1bd9-4d7a-84a4-a6b34cae5d7a] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.002949132s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 22:46:55.943421 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:55.948714 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:55.959039 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:55.979296 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:56.019662 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:56.100000 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:56.260390 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:56.580953 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:57.221834 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:46:58.502319 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:47:01.063385 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:47:06.184068 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:47:16.424256 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-332576 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.130486233s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-332576 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0108 22:47:36.904526 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019775028s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons disable ingress --alsologtostderr -v=1: (7.576981279s)
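The nslookup step above is issued from the CI host against UDP/53 on the node IP. As a complementary check, the same lookup can be tried from inside the cluster network; a hedged sketch (the pod name and busybox image are illustrative, and it assumes the ingress-dns addon is re-enabled on the ingress-addon-legacy-332576 profile, since the steps above have just disabled it):

	# Throwaway pod that queries the ingress-dns server on the node IP from within the cluster network
	kubectl --context ingress-addon-legacy-332576 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.28 -- nslookup hello-john.test 192.168.49.2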
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-332576
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-332576:

-- stdout --
	[
	    {
	        "Id": "dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d",
	        "Created": "2024-01-08T22:43:34.646684501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1180112,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T22:43:35.007526391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3167e60a71dbae425a4b9caa3fc8f52cf3c3b5035be6746ce0af2b692a3018d8",
	        "ResolvConfPath": "/var/lib/docker/containers/dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d/hostname",
	        "HostsPath": "/var/lib/docker/containers/dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d/hosts",
	        "LogPath": "/var/lib/docker/containers/dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d/dad8aaa911c72a6a067f4902ec2f50c754148b31267ac61d7c34747e0e078c7d-json.log",
	        "Name": "/ingress-addon-legacy-332576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-332576:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-332576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f2b5b4e3248564a451016151c2a378925cdf49f0cebe9e83f32af315083df838-init/diff:/var/lib/docker/overlay2/38e0010c12bf0b8a699570be0a9e49c2514b24d0012b6438a157027e46de7e51/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2b5b4e3248564a451016151c2a378925cdf49f0cebe9e83f32af315083df838/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2b5b4e3248564a451016151c2a378925cdf49f0cebe9e83f32af315083df838/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2b5b4e3248564a451016151c2a378925cdf49f0cebe9e83f32af315083df838/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-332576",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-332576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-332576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-332576",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-332576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf097538c11c15543ca8016ea84650c0c8b9de4d955f2965a6843256a9700e85",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34047"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34044"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34046"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34045"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cf097538c11c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-332576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dad8aaa911c7",
	                        "ingress-addon-legacy-332576"
	                    ],
	                    "NetworkID": "22b2c3c493b759f3c5207e77561d1786ebf3d47a412cd5ec7f6c8c8bbbc8d55d",
	                    "EndpointID": "9e844070fc42c84e1ae5359fa280251aaa24c9392c2a6bfceb0d266fc308df1f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
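When only a couple of fields from the inspect dump above are needed (for example the node IP on the profile network and the host-mapped SSH port), a Go template keeps the output short. A convenience sketch using the field names visible in the JSON above:

	# Node IP on the ingress-addon-legacy-332576 Docker network
	docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-332576").IPAddress }}' ingress-addon-legacy-332576
	# Host port mapped to the container's SSH port 22
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' ingress-addon-legacy-332576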
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-332576 -n ingress-addon-legacy-332576
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-332576 logs -n 25: (1.435686149s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-037488 image load --daemon                                  | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-037488               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image ls                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	| image   | functional-037488 image load --daemon                                  | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-037488               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image ls                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	| image   | functional-037488 image save                                           | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-037488               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image rm                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-037488               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image ls                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	| image   | functional-037488 image load                                           | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image ls                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	| image   | functional-037488 image save --daemon                                  | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-037488               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488                                                      | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488                                                      | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-037488 ssh pgrep                                            | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-037488                                                      | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image build -t                                       | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:43 UTC |
	|         | localhost/my-image:functional-037488                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-037488                                                      | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:42 UTC | 08 Jan 24 22:42 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-037488 image ls                                             | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:43 UTC | 08 Jan 24 22:43 UTC |
	| delete  | -p functional-037488                                                   | functional-037488           | jenkins | v1.32.0 | 08 Jan 24 22:43 UTC | 08 Jan 24 22:43 UTC |
	| start   | -p ingress-addon-legacy-332576                                         | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:43 UTC | 08 Jan 24 22:44 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-332576                                            | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:44 UTC | 08 Jan 24 22:44 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-332576                                            | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:44 UTC | 08 Jan 24 22:44 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-332576                                            | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:45 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-332576 ip                                         | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:47 UTC | 08 Jan 24 22:47 UTC |
	| addons  | ingress-addon-legacy-332576                                            | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:47 UTC | 08 Jan 24 22:47 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-332576                                            | ingress-addon-legacy-332576 | jenkins | v1.32.0 | 08 Jan 24 22:47 UTC | 08 Jan 24 22:47 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:43:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:43:04.824108 1179664 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:43:04.824300 1179664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:43:04.824326 1179664 out.go:309] Setting ErrFile to fd 2...
	I0108 22:43:04.824347 1179664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:43:04.824625 1179664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:43:04.825120 1179664 out.go:303] Setting JSON to false
	I0108 22:43:04.826027 1179664 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19525,"bootTime":1704734260,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:43:04.826122 1179664 start.go:138] virtualization:  
	I0108 22:43:04.829383 1179664 out.go:177] * [ingress-addon-legacy-332576] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:43:04.832230 1179664 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:43:04.834451 1179664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:43:04.832338 1179664 notify.go:220] Checking for updates...
	I0108 22:43:04.838309 1179664 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:43:04.840455 1179664 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:43:04.842796 1179664 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 22:43:04.845186 1179664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:43:04.847463 1179664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:43:04.880151 1179664 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:43:04.880278 1179664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:43:04.989767 1179664 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 22:43:04.979955962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:43:04.989880 1179664 docker.go:295] overlay module found
	I0108 22:43:04.992264 1179664 out.go:177] * Using the docker driver based on user configuration
	I0108 22:43:04.994889 1179664 start.go:298] selected driver: docker
	I0108 22:43:04.994905 1179664 start.go:902] validating driver "docker" against <nil>
	I0108 22:43:04.994918 1179664 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:43:04.995510 1179664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:43:05.078982 1179664 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-08 22:43:05.068745908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:43:05.079160 1179664 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:43:05.079456 1179664 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:43:05.081613 1179664 out.go:177] * Using Docker driver with root privileges
	I0108 22:43:05.084022 1179664 cni.go:84] Creating CNI manager for ""
	I0108 22:43:05.084052 1179664 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:43:05.084066 1179664 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:43:05.084079 1179664 start_flags.go:321] config:
	{Name:ingress-addon-legacy-332576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-332576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:43:05.087239 1179664 out.go:177] * Starting control plane node ingress-addon-legacy-332576 in cluster ingress-addon-legacy-332576
	I0108 22:43:05.089373 1179664 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:43:05.092161 1179664 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:43:05.095822 1179664 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 22:43:05.095863 1179664 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:43:05.115042 1179664 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 22:43:05.115079 1179664 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 22:43:05.165041 1179664 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0108 22:43:05.165077 1179664 cache.go:56] Caching tarball of preloaded images
	I0108 22:43:05.165255 1179664 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 22:43:05.167757 1179664 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 22:43:05.169719 1179664 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:43:05.286320 1179664 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0108 22:43:26.606369 1179664 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:43:26.606477 1179664 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:43:27.800313 1179664 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
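	
	For reference, the preload download above pins an md5 checksum in the URL query string; a minimal sketch of re-checking the cached tarball by hand, using only the path and checksum already shown in these log lines:
	
	    # Recompute the tarball's md5 and compare it with the checksum from the download URL.
	    md5sum /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	    # expected digest: 8ddd7f37d9a9977fe856222993d36c3d
	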
	I0108 22:43:27.800697 1179664 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/config.json ...
	I0108 22:43:27.800732 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/config.json: {Name:mk129fa76aacf9102820ef3b8f4092f2989db214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:27.800913 1179664 cache.go:194] Successfully downloaded all kic artifacts
	I0108 22:43:27.800976 1179664 start.go:365] acquiring machines lock for ingress-addon-legacy-332576: {Name:mk0dcd5c9c3bf824ebd22c9ccfe5a859ddb6faf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:43:27.801059 1179664 start.go:369] acquired machines lock for "ingress-addon-legacy-332576" in 68.677µs
	I0108 22:43:27.801089 1179664 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-332576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-332576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:43:27.801161 1179664 start.go:125] createHost starting for "" (driver="docker")
	I0108 22:43:27.804248 1179664 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0108 22:43:27.804496 1179664 start.go:159] libmachine.API.Create for "ingress-addon-legacy-332576" (driver="docker")
	I0108 22:43:27.804523 1179664 client.go:168] LocalClient.Create starting
	I0108 22:43:27.804617 1179664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem
	I0108 22:43:27.804654 1179664 main.go:141] libmachine: Decoding PEM data...
	I0108 22:43:27.804673 1179664 main.go:141] libmachine: Parsing certificate...
	I0108 22:43:27.804743 1179664 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem
	I0108 22:43:27.804769 1179664 main.go:141] libmachine: Decoding PEM data...
	I0108 22:43:27.804784 1179664 main.go:141] libmachine: Parsing certificate...
	I0108 22:43:27.805160 1179664 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-332576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 22:43:27.822969 1179664 cli_runner.go:211] docker network inspect ingress-addon-legacy-332576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 22:43:27.823048 1179664 network_create.go:281] running [docker network inspect ingress-addon-legacy-332576] to gather additional debugging logs...
	I0108 22:43:27.823065 1179664 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-332576
	W0108 22:43:27.840591 1179664 cli_runner.go:211] docker network inspect ingress-addon-legacy-332576 returned with exit code 1
	I0108 22:43:27.840628 1179664 network_create.go:284] error running [docker network inspect ingress-addon-legacy-332576]: docker network inspect ingress-addon-legacy-332576: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-332576 not found
	I0108 22:43:27.840642 1179664 network_create.go:286] output of [docker network inspect ingress-addon-legacy-332576]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-332576 not found
	
	** /stderr **
	I0108 22:43:27.840762 1179664 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:43:27.859605 1179664 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004780e0}
	I0108 22:43:27.859643 1179664 network_create.go:124] attempt to create docker network ingress-addon-legacy-332576 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0108 22:43:27.859702 1179664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-332576 ingress-addon-legacy-332576
	I0108 22:43:27.939293 1179664 network_create.go:108] docker network ingress-addon-legacy-332576 192.168.49.0/24 created
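	
	As a quick cross-check (not taken from the log), the network that was just created can be read back with docker's Go-template formatting to confirm the subnet and gateway minikube selected:
	
	    # Show the CIDR and gateway of the bridge network created for this profile.
	    docker network inspect ingress-addon-legacy-332576 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected: 192.168.49.0/24 192.168.49.1
	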
	I0108 22:43:27.939325 1179664 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-332576" container
	I0108 22:43:27.939405 1179664 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 22:43:27.956537 1179664 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-332576 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-332576 --label created_by.minikube.sigs.k8s.io=true
	I0108 22:43:27.978299 1179664 oci.go:103] Successfully created a docker volume ingress-addon-legacy-332576
	I0108 22:43:27.978400 1179664 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-332576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-332576 --entrypoint /usr/bin/test -v ingress-addon-legacy-332576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 22:43:29.573201 1179664 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-332576-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-332576 --entrypoint /usr/bin/test -v ingress-addon-legacy-332576:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib: (1.594753738s)
	I0108 22:43:29.573235 1179664 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-332576
	I0108 22:43:29.573255 1179664 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 22:43:29.573276 1179664 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 22:43:29.573373 1179664 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-332576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 22:43:34.562608 1179664 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-332576:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (4.989190639s)
	I0108 22:43:34.562642 1179664 kic.go:203] duration metric: took 4.989363 seconds to extract preloaded images to volume
	W0108 22:43:34.562786 1179664 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 22:43:34.562901 1179664 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 22:43:34.630290 1179664 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-332576 --name ingress-addon-legacy-332576 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-332576 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-332576 --network ingress-addon-legacy-332576 --ip 192.168.49.2 --volume ingress-addon-legacy-332576:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:43:35.017780 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Running}}
	I0108 22:43:35.046001 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:43:35.081319 1179664 cli_runner.go:164] Run: docker exec ingress-addon-legacy-332576 stat /var/lib/dpkg/alternatives/iptables
	I0108 22:43:35.162765 1179664 oci.go:144] the created container "ingress-addon-legacy-332576" has a running status.
	I0108 22:43:35.162798 1179664 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa...
	I0108 22:43:35.897898 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 22:43:35.897986 1179664 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 22:43:35.922930 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:43:35.961884 1179664 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 22:43:35.961902 1179664 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-332576 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 22:43:36.059376 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:43:36.085439 1179664 machine.go:88] provisioning docker machine ...
	I0108 22:43:36.085482 1179664 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-332576"
	I0108 22:43:36.085556 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:36.106997 1179664 main.go:141] libmachine: Using SSH client type: native
	I0108 22:43:36.108431 1179664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34048 <nil> <nil>}
	I0108 22:43:36.108457 1179664 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-332576 && echo "ingress-addon-legacy-332576" | sudo tee /etc/hostname
	I0108 22:43:36.280856 1179664 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-332576
	
	I0108 22:43:36.281052 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:36.307673 1179664 main.go:141] libmachine: Using SSH client type: native
	I0108 22:43:36.308086 1179664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34048 <nil> <nil>}
	I0108 22:43:36.308106 1179664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-332576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-332576/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-332576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:43:36.450326 1179664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:43:36.450409 1179664 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 22:43:36.450441 1179664 ubuntu.go:177] setting up certificates
	I0108 22:43:36.450478 1179664 provision.go:83] configureAuth start
	I0108 22:43:36.450571 1179664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-332576
	I0108 22:43:36.468227 1179664 provision.go:138] copyHostCerts
	I0108 22:43:36.468275 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:43:36.468307 1179664 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 22:43:36.468313 1179664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:43:36.468388 1179664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 22:43:36.468475 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:43:36.468493 1179664 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 22:43:36.468497 1179664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:43:36.468523 1179664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 22:43:36.468562 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:43:36.468576 1179664 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 22:43:36.468579 1179664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:43:36.468601 1179664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 22:43:36.468647 1179664 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-332576 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-332576]
	I0108 22:43:36.966503 1179664 provision.go:172] copyRemoteCerts
	I0108 22:43:36.966600 1179664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:43:36.966647 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:36.988782 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
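	
	The provisioner is reaching the node over a host port that Docker published for container port 22; a minimal sketch of opening the same SSH session manually, reusing the port, key path, and username reported in the line above:
	
	    # SSH into the kic container the same way the libmachine provisioner does.
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa \
	        -p 34048 docker@127.0.0.1
	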
	I0108 22:43:37.100032 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 22:43:37.100112 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:43:37.130357 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 22:43:37.130429 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0108 22:43:37.160641 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 22:43:37.160719 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:43:37.190101 1179664 provision.go:86] duration metric: configureAuth took 739.58441ms
	I0108 22:43:37.190133 1179664 ubuntu.go:193] setting minikube options for container-runtime
	I0108 22:43:37.190339 1179664 config.go:182] Loaded profile config "ingress-addon-legacy-332576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 22:43:37.190450 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:37.208182 1179664 main.go:141] libmachine: Using SSH client type: native
	I0108 22:43:37.208619 1179664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34048 <nil> <nil>}
	I0108 22:43:37.208645 1179664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:43:37.480835 1179664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:43:37.480905 1179664 machine.go:91] provisioned docker machine in 1.395445711s
	I0108 22:43:37.480920 1179664 client.go:171] LocalClient.Create took 9.676391486s
	I0108 22:43:37.480935 1179664 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-332576" took 9.676439502s
	I0108 22:43:37.480943 1179664 start.go:300] post-start starting for "ingress-addon-legacy-332576" (driver="docker")
	I0108 22:43:37.480983 1179664 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:43:37.481136 1179664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:43:37.481186 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:37.499325 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:43:37.596243 1179664 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:43:37.600344 1179664 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 22:43:37.600383 1179664 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 22:43:37.600412 1179664 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 22:43:37.600424 1179664 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 22:43:37.600435 1179664 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 22:43:37.600516 1179664 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 22:43:37.600599 1179664 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 22:43:37.600609 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /etc/ssl/certs/11522512.pem
	I0108 22:43:37.600724 1179664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:43:37.611387 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:43:37.641344 1179664 start.go:303] post-start completed in 160.370836ms
	I0108 22:43:37.641792 1179664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-332576
	I0108 22:43:37.659634 1179664 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/config.json ...
	I0108 22:43:37.659916 1179664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:43:37.659968 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:37.678030 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:43:37.771147 1179664 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 22:43:37.777134 1179664 start.go:128] duration metric: createHost completed in 9.975956836s
	I0108 22:43:37.777158 1179664 start.go:83] releasing machines lock for "ingress-addon-legacy-332576", held for 9.976081569s
	I0108 22:43:37.777230 1179664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-332576
	I0108 22:43:37.795544 1179664 ssh_runner.go:195] Run: cat /version.json
	I0108 22:43:37.795607 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:37.795658 1179664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:43:37.795735 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:43:37.822992 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:43:37.823687 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:43:38.050050 1179664 ssh_runner.go:195] Run: systemctl --version
	I0108 22:43:38.056312 1179664 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:43:38.205533 1179664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 22:43:38.211260 1179664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:43:38.237500 1179664 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 22:43:38.237588 1179664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:43:38.274087 1179664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 22:43:38.274165 1179664 start.go:475] detecting cgroup driver to use...
	I0108 22:43:38.274206 1179664 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 22:43:38.274264 1179664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:43:38.292452 1179664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:43:38.306130 1179664 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:43:38.306195 1179664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:43:38.325196 1179664 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:43:38.342389 1179664 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:43:38.456581 1179664 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:43:38.568350 1179664 docker.go:219] disabling docker service ...
	I0108 22:43:38.568424 1179664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:43:38.591107 1179664 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:43:38.605912 1179664 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:43:38.711094 1179664 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:43:38.823878 1179664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:43:38.838496 1179664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:43:38.858514 1179664 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 22:43:38.858592 1179664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:43:38.870399 1179664 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:43:38.870532 1179664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:43:38.882989 1179664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:43:38.896193 1179664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:43:38.908845 1179664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:43:38.920413 1179664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:43:38.931257 1179664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:43:38.941645 1179664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:43:39.045781 1179664 ssh_runner.go:195] Run: sudo systemctl restart crio
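	
	The sed edits above land in CRI-O's drop-in config before this restart; a hedged sketch of spot-checking the three values minikube rewrites (the rest of the file will vary by base image):
	
	    # The pause image, cgroup manager, and conmon cgroup set by the sed commands above.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected (approximately):
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	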
	I0108 22:43:39.179192 1179664 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:43:39.179264 1179664 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:43:39.184212 1179664 start.go:543] Will wait 60s for crictl version
	I0108 22:43:39.184286 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:39.189177 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:43:39.233984 1179664 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 22:43:39.234069 1179664 ssh_runner.go:195] Run: crio --version
	I0108 22:43:39.279154 1179664 ssh_runner.go:195] Run: crio --version
	I0108 22:43:39.328304 1179664 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0108 22:43:39.330672 1179664 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-332576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:43:39.348339 1179664 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0108 22:43:39.353366 1179664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:43:39.367513 1179664 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 22:43:39.367585 1179664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:43:39.419787 1179664 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 22:43:39.419868 1179664 ssh_runner.go:195] Run: which lz4
	I0108 22:43:39.424419 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0108 22:43:39.424514 1179664 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:43:39.428948 1179664 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:43:39.428989 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0108 22:43:41.597827 1179664 crio.go:444] Took 2.173346 seconds to copy over tarball
	I0108 22:43:41.597929 1179664 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:43:44.379380 1179664 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.781418403s)
	I0108 22:43:44.379406 1179664 crio.go:451] Took 2.781554 seconds to extract the tarball
	I0108 22:43:44.379416 1179664 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:43:44.466066 1179664 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:43:44.510925 1179664 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 22:43:44.510952 1179664 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:43:44.511002 1179664 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:43:44.511190 1179664 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 22:43:44.511277 1179664 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 22:43:44.511363 1179664 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 22:43:44.511430 1179664 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 22:43:44.511499 1179664 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 22:43:44.511571 1179664 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:43:44.511640 1179664 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 22:43:44.513368 1179664 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 22:43:44.513378 1179664 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 22:43:44.513438 1179664 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:43:44.513477 1179664 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:43:44.513739 1179664 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 22:43:44.513803 1179664 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 22:43:44.513979 1179664 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 22:43:44.514110 1179664 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0108 22:43:44.830775 1179664 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.841433 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0108 22:43:44.851891 1179664 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.852148 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 22:43:44.866907 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0108 22:43:44.872437 1179664 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.872691 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0108 22:43:44.895174 1179664 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.895365 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0108 22:43:44.904134 1179664 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.904311 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0108 22:43:44.908972 1179664 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:44.909160 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 22:43:44.931598 1179664 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0108 22:43:44.931714 1179664 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 22:43:44.931791 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:44.976549 1179664 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0108 22:43:44.976590 1179664 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 22:43:44.976639 1179664 ssh_runner.go:195] Run: which crictl
	W0108 22:43:45.052167 1179664 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0108 22:43:45.052452 1179664 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:43:45.065911 1179664 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0108 22:43:45.065997 1179664 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0108 22:43:45.066038 1179664 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 22:43:45.066038 1179664 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 22:43:45.066109 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.065937 1179664 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0108 22:43:45.066181 1179664 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 22:43:45.066216 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.066260 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.107771 1179664 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0108 22:43:45.107820 1179664 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 22:43:45.107875 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.107972 1179664 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0108 22:43:45.107993 1179664 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 22:43:45.108021 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.108113 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 22:43:45.108200 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 22:43:45.271642 1179664 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0108 22:43:45.271743 1179664 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:43:45.271779 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 22:43:45.271867 1179664 ssh_runner.go:195] Run: which crictl
	I0108 22:43:45.271918 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 22:43:45.272006 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 22:43:45.272141 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 22:43:45.272204 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 22:43:45.272237 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0108 22:43:45.272291 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 22:43:45.454511 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 22:43:45.454547 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 22:43:45.454621 1179664 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:43:45.454679 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0108 22:43:45.454746 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0108 22:43:45.454801 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 22:43:45.511503 1179664 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0108 22:43:45.511587 1179664 cache_images.go:92] LoadImages completed in 1.000621569s
	W0108 22:43:45.511664 1179664 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
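	
	When the cached-image load fails like this, the quickest follow-up is to list what the runtime actually holds; a sketch using the human-readable form of the same crictl call the tooling ran above:
	
	    # Images currently known to CRI-O; any missing v1.18.20 control-plane image explains the warning.
	    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'
	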
	I0108 22:43:45.511744 1179664 ssh_runner.go:195] Run: crio config
	I0108 22:43:45.568336 1179664 cni.go:84] Creating CNI manager for ""
	I0108 22:43:45.568359 1179664 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:43:45.568414 1179664 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:43:45.568441 1179664 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-332576 NodeName:ingress-addon-legacy-332576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 22:43:45.568602 1179664 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-332576"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
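The rendered config above is one multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. As a minimal sketch (run from a shell on the node, once the file has been copied into place under /var/tmp/minikube as the following lines show), the same content can be inspected directly:

    # Show the kubeadm config file minikube writes onto the node
    sudo cat /var/tmp/minikube/kubeadm.yaml
    # Just the document boundaries, API versions and kinds
    sudo grep -E '^(---|apiVersion:|kind:)' /var/tmp/minikube/kubeadm.yaml
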
	I0108 22:43:45.568680 1179664 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-332576 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-332576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:43:45.568749 1179664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 22:43:45.579725 1179664 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:43:45.579829 1179664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:43:45.590783 1179664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0108 22:43:45.613299 1179664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 22:43:45.635503 1179664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
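The three files scp'd above are the kubelet systemd drop-in, the kubelet unit, and the kubeadm config. A quick sketch for confirming what systemd will actually run, executed on the node:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet            # merged view of the unit plus its drop-in
    systemctl status kubelet --no-pager
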
	I0108 22:43:45.657885 1179664 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0108 22:43:45.662580 1179664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:43:45.675992 1179664 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576 for IP: 192.168.49.2
	I0108 22:43:45.676038 1179664 certs.go:190] acquiring lock for shared ca certs: {Name:mk2f5e9ada40477437d91c2ac8d6b62bb5d1e97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:45.676208 1179664 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key
	I0108 22:43:45.676267 1179664 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key
	I0108 22:43:45.676330 1179664 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key
	I0108 22:43:45.676343 1179664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt with IP's: []
	I0108 22:43:46.230306 1179664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt ...
	I0108 22:43:46.230339 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: {Name:mk59caa25f594d824b1b17202ea06709af55d459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:46.230547 1179664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key ...
	I0108 22:43:46.230562 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key: {Name:mk3173e8d1640d65c8544d0d8b83a183802eb896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:46.230648 1179664 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key.dd3b5fb2
	I0108 22:43:46.230663 1179664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:43:46.959122 1179664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt.dd3b5fb2 ...
	I0108 22:43:46.959155 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt.dd3b5fb2: {Name:mkbd727c66d872ce3928bb7e3d39f8ededdffadd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:46.959335 1179664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key.dd3b5fb2 ...
	I0108 22:43:46.959353 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key.dd3b5fb2: {Name:mk4fdd5fd5a1d3dd58a0b1777988f53ca9aa4a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:46.959439 1179664 certs.go:337] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt
	I0108 22:43:46.959515 1179664 certs.go:341] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key
	I0108 22:43:46.959574 1179664 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.key
	I0108 22:43:46.959590 1179664 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.crt with IP's: []
	I0108 22:43:47.305237 1179664 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.crt ...
	I0108 22:43:47.305270 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.crt: {Name:mk9fa7490510bb864cee4132dd33834cb76bf25d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:43:47.305464 1179664 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.key ...
	I0108 22:43:47.305481 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.key: {Name:mka1cb57eac08eada3c2f8f395a8fb1abb7023de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
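The apiserver certificate generated above is signed for the IPs listed at 22:43:46.230663. An illustrative check of the SANs on the resulting file (path taken from the log; openssl's text layout may vary slightly by version):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
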
	I0108 22:43:47.305575 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 22:43:47.305598 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 22:43:47.305622 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 22:43:47.305653 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 22:43:47.305670 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 22:43:47.305685 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 22:43:47.305701 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 22:43:47.305717 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 22:43:47.305779 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem (1338 bytes)
	W0108 22:43:47.305822 1179664 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251_empty.pem, impossibly tiny 0 bytes
	I0108 22:43:47.305837 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:43:47.305868 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:43:47.305902 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:43:47.305931 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem (1675 bytes)
	I0108 22:43:47.305986 1179664 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:43:47.306017 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /usr/share/ca-certificates/11522512.pem
	I0108 22:43:47.306040 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:43:47.306056 1179664 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem -> /usr/share/ca-certificates/1152251.pem
	I0108 22:43:47.306671 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:43:47.337643 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:43:47.367125 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:43:47.396865 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 22:43:47.426115 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:43:47.455841 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:43:47.484551 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:43:47.514280 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:43:47.543676 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /usr/share/ca-certificates/11522512.pem (1708 bytes)
	I0108 22:43:47.573943 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:43:47.604272 1179664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem --> /usr/share/ca-certificates/1152251.pem (1338 bytes)
	I0108 22:43:47.633906 1179664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:43:47.656267 1179664 ssh_runner.go:195] Run: openssl version
	I0108 22:43:47.663402 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11522512.pem && ln -fs /usr/share/ca-certificates/11522512.pem /etc/ssl/certs/11522512.pem"
	I0108 22:43:47.675500 1179664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11522512.pem
	I0108 22:43:47.680238 1179664 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:39 /usr/share/ca-certificates/11522512.pem
	I0108 22:43:47.680367 1179664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11522512.pem
	I0108 22:43:47.689274 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11522512.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:43:47.701200 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:43:47.712749 1179664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:43:47.717468 1179664 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:43:47.717577 1179664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:43:47.725856 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:43:47.737099 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1152251.pem && ln -fs /usr/share/ca-certificates/1152251.pem /etc/ssl/certs/1152251.pem"
	I0108 22:43:47.748584 1179664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1152251.pem
	I0108 22:43:47.753657 1179664 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:39 /usr/share/ca-certificates/1152251.pem
	I0108 22:43:47.753783 1179664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1152251.pem
	I0108 22:43:47.762364 1179664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1152251.pem /etc/ssl/certs/51391683.0"
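The ls/hash/ln sequence above installs each PEM under /etc/ssl/certs by its OpenSSL subject hash, which is how the system trust store locates it. One round of that done by hand (sketch; for minikubeCA.pem the hash resolves to the b5213941.0 link created above):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
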
	I0108 22:43:47.773771 1179664 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:43:47.778275 1179664 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:43:47.778326 1179664 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-332576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-332576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:43:47.778398 1179664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:43:47.778458 1179664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:43:47.819709 1179664 cri.go:89] found id: ""
	I0108 22:43:47.819782 1179664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:43:47.830671 1179664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:43:47.841707 1179664 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 22:43:47.841828 1179664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:43:47.852583 1179664 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:43:47.852628 1179664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 22:43:47.910631 1179664 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 22:43:47.911102 1179664 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:43:47.963532 1179664 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:43:47.963613 1179664 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 22:43:47.963651 1179664 kubeadm.go:322] OS: Linux
	I0108 22:43:47.963712 1179664 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 22:43:47.963771 1179664 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 22:43:47.963840 1179664 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 22:43:47.963890 1179664 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 22:43:47.963949 1179664 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 22:43:47.964006 1179664 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 22:43:48.061488 1179664 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:43:48.061602 1179664 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:43:48.061704 1179664 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:43:48.301358 1179664 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:43:48.302811 1179664 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:43:48.302876 1179664 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:43:48.413441 1179664 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:43:48.417219 1179664 out.go:204]   - Generating certificates and keys ...
	I0108 22:43:48.417485 1179664 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:43:48.417616 1179664 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:43:48.976877 1179664 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:43:49.232143 1179664 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:43:49.493365 1179664 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:43:49.774975 1179664 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:43:50.021220 1179664 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:43:50.021365 1179664 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-332576 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:43:50.289189 1179664 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:43:50.289589 1179664 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-332576 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0108 22:43:51.010370 1179664 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:43:51.536985 1179664 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:43:52.070863 1179664 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:43:52.071251 1179664 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:43:52.387712 1179664 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:43:53.222755 1179664 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:43:53.452812 1179664 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:43:53.794966 1179664 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:43:53.795914 1179664 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:43:53.799799 1179664 out.go:204]   - Booting up control plane ...
	I0108 22:43:53.799902 1179664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:43:53.807820 1179664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:43:53.810832 1179664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:43:53.812163 1179664 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:43:53.815361 1179664 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:44:06.318247 1179664 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502761 seconds
	I0108 22:44:06.318363 1179664 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:44:06.332097 1179664 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:44:06.854886 1179664 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:44:06.855033 1179664 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-332576 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 22:44:07.364172 1179664 kubeadm.go:322] [bootstrap-token] Using token: tiuv35.c6e2c21x6wolcpjr
	I0108 22:44:07.366677 1179664 out.go:204]   - Configuring RBAC rules ...
	I0108 22:44:07.366812 1179664 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:44:07.371023 1179664 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:44:07.378200 1179664 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:44:07.381060 1179664 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:44:07.385208 1179664 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:44:07.388108 1179664 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:44:07.397144 1179664 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:44:07.712596 1179664 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:44:07.791673 1179664 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:44:07.792865 1179664 kubeadm.go:322] 
	I0108 22:44:07.792934 1179664 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:44:07.792940 1179664 kubeadm.go:322] 
	I0108 22:44:07.793029 1179664 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:44:07.793035 1179664 kubeadm.go:322] 
	I0108 22:44:07.793059 1179664 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:44:07.794927 1179664 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:44:07.794985 1179664 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:44:07.794991 1179664 kubeadm.go:322] 
	I0108 22:44:07.795041 1179664 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:44:07.795112 1179664 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:44:07.795177 1179664 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:44:07.795182 1179664 kubeadm.go:322] 
	I0108 22:44:07.795261 1179664 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:44:07.795339 1179664 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:44:07.795345 1179664 kubeadm.go:322] 
	I0108 22:44:07.795426 1179664 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tiuv35.c6e2c21x6wolcpjr \
	I0108 22:44:07.795526 1179664 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 \
	I0108 22:44:07.795549 1179664 kubeadm.go:322]     --control-plane 
	I0108 22:44:07.795569 1179664 kubeadm.go:322] 
	I0108 22:44:07.795651 1179664 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:44:07.795655 1179664 kubeadm.go:322] 
	I0108 22:44:07.795732 1179664 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tiuv35.c6e2c21x6wolcpjr \
	I0108 22:44:07.796557 1179664 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 
	I0108 22:44:07.796957 1179664 kubeadm.go:322] W0108 22:43:47.909789    1230 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 22:44:07.797194 1179664 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 22:44:07.797293 1179664 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:44:07.797413 1179664 kubeadm.go:322] W0108 22:43:53.807442    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 22:44:07.797532 1179664 kubeadm.go:322] W0108 22:43:53.810931    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
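With 'kubeadm init' finished, the new control plane can be checked from the node using the generated admin.conf and the cached kubectl binary (paths as used elsewhere in this log); a short sketch:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get pods
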
	I0108 22:44:07.797548 1179664 cni.go:84] Creating CNI manager for ""
	I0108 22:44:07.797555 1179664 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:44:07.800410 1179664 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 22:44:07.802872 1179664 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 22:44:07.808345 1179664 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0108 22:44:07.808370 1179664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 22:44:07.832845 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
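After the kindnet manifest is applied, the CNI plumbing on the node can be spot-checked as sketched below (the /opt/cni/bin listing mirrors the 'stat /opt/cni/bin/portmap' step above):

    ls /opt/cni/bin
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide
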
	I0108 22:44:08.329773 1179664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:44:08.329882 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:08.329930 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=ingress-addon-legacy-332576 minikube.k8s.io/updated_at=2024_01_08T22_44_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:08.499364 1179664 ops.go:34] apiserver oom_adj: -16
	I0108 22:44:08.499452 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:08.999895 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:09.500124 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:10.000067 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:10.500268 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:11.002803 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:11.499810 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:11.999642 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:12.500237 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:13.000377 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:13.500221 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:14.003241 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:14.499607 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:14.999728 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:15.500080 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:16.000229 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:16.500466 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:16.999585 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:17.499626 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:18.000366 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:18.499664 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:18.999526 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:19.500506 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:20.000619 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:20.500357 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:20.999912 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:21.499596 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:21.999792 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:22.500280 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:22.999641 1179664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:44:23.129949 1179664 kubeadm.go:1088] duration metric: took 14.800174264s to wait for elevateKubeSystemPrivileges.
	I0108 22:44:23.129994 1179664 kubeadm.go:406] StartCluster complete in 35.351670913s
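The repeated 'get sa default' calls above are simply a poll for the default service account before the minikube-rbac clusterrolebinding takes effect; roughly equivalent to a shell loop like the following (retry interval approximated from the timestamps above):

    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries roughly every half second
    done
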
	I0108 22:44:23.130014 1179664 settings.go:142] acquiring lock: {Name:mk4ee991c68e71724ae577ac1a9a811b1b4e899c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:44:23.130075 1179664 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:44:23.130846 1179664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/kubeconfig: {Name:mk4903c0deda408cf5380ebed8399fb64deac655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:44:23.131591 1179664 kapi.go:59] client config for ingress-addon-legacy-332576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:44:23.132766 1179664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:44:23.133424 1179664 config.go:182] Loaded profile config "ingress-addon-legacy-332576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 22:44:23.133474 1179664 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:44:23.133542 1179664 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-332576"
	I0108 22:44:23.133560 1179664 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-332576"
	I0108 22:44:23.133615 1179664 host.go:66] Checking if "ingress-addon-legacy-332576" exists ...
	I0108 22:44:23.134068 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:44:23.134339 1179664 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 22:44:23.134376 1179664 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-332576"
	I0108 22:44:23.134391 1179664 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-332576"
	I0108 22:44:23.134658 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:44:23.179518 1179664 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:44:23.181564 1179664 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:44:23.181583 1179664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:44:23.181657 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:44:23.212136 1179664 kapi.go:59] client config for ingress-addon-legacy-332576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:44:23.212449 1179664 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-332576"
	I0108 22:44:23.212495 1179664 host.go:66] Checking if "ingress-addon-legacy-332576" exists ...
	I0108 22:44:23.212946 1179664 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-332576 --format={{.State.Status}}
	I0108 22:44:23.224988 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:44:23.259382 1179664 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:44:23.259409 1179664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:44:23.259474 1179664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-332576
	I0108 22:44:23.285530 1179664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34048 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/ingress-addon-legacy-332576/id_rsa Username:docker}
	I0108 22:44:23.398933 1179664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:44:23.420666 1179664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:44:23.504341 1179664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
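Both addon manifests are applied with the node-local kubeconfig; the same commands can be re-run by hand to re-apply or inspect them (the storageclass listing is an illustrative follow-up check):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl get storageclass
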
	I0108 22:44:23.651262 1179664 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-332576" context rescaled to 1 replicas
	I0108 22:44:23.651353 1179664 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:44:23.654206 1179664 out.go:177] * Verifying Kubernetes components...
	I0108 22:44:23.656201 1179664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:44:23.998768 1179664 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
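The sed pipeline at 22:44:23.398933 injects a hosts{} block mapping host.minikube.internal to 192.168.49.1 into the coredns ConfigMap; a sketch for verifying the injected record from the node:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts'
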
	I0108 22:44:24.267672 1179664 kapi.go:59] client config for ingress-addon-legacy-332576: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:44:24.268009 1179664 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-332576" to be "Ready" ...
	I0108 22:44:24.286148 1179664 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 22:44:24.287774 1179664 addons.go:508] enable addons completed in 1.154291863s: enabled=[storage-provisioner default-storageclass]
	I0108 22:44:26.271804 1179664 node_ready.go:58] node "ingress-addon-legacy-332576" has status "Ready":"False"
	I0108 22:44:28.771604 1179664 node_ready.go:58] node "ingress-addon-legacy-332576" has status "Ready":"False"
	I0108 22:44:30.771802 1179664 node_ready.go:58] node "ingress-addon-legacy-332576" has status "Ready":"False"
	I0108 22:44:31.270864 1179664 node_ready.go:49] node "ingress-addon-legacy-332576" has status "Ready":"True"
	I0108 22:44:31.270887 1179664 node_ready.go:38] duration metric: took 7.002832998s waiting for node "ingress-addon-legacy-332576" to be "Ready" ...
	I0108 22:44:31.270898 1179664 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:44:31.280759 1179664 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:33.284187 1179664 pod_ready.go:102] pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 22:44:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 22:44:35.284673 1179664 pod_ready.go:102] pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-08 22:44:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0108 22:44:37.286558 1179664 pod_ready.go:102] pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace has status "Ready":"False"
	I0108 22:44:37.786649 1179664 pod_ready.go:92] pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:37.786677 1179664 pod_ready.go:81] duration metric: took 6.505834671s waiting for pod "coredns-66bff467f8-dtv5t" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.786688 1179664 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.791800 1179664 pod_ready.go:92] pod "etcd-ingress-addon-legacy-332576" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:37.791825 1179664 pod_ready.go:81] duration metric: took 5.129754ms waiting for pod "etcd-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.791875 1179664 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.796708 1179664 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-332576" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:37.796738 1179664 pod_ready.go:81] duration metric: took 4.844767ms waiting for pod "kube-apiserver-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.796752 1179664 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.802237 1179664 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-332576" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:37.802264 1179664 pod_ready.go:81] duration metric: took 5.503273ms waiting for pod "kube-controller-manager-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.802276 1179664 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzcc8" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.807058 1179664 pod_ready.go:92] pod "kube-proxy-pzcc8" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:37.807086 1179664 pod_ready.go:81] duration metric: took 4.802379ms waiting for pod "kube-proxy-pzcc8" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.807099 1179664 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:37.981509 1179664 request.go:629] Waited for 174.270377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-332576
	I0108 22:44:38.181357 1179664 request.go:629] Waited for 195.125482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-332576
	I0108 22:44:38.184775 1179664 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-332576" in "kube-system" namespace has status "Ready":"True"
	I0108 22:44:38.184847 1179664 pod_ready.go:81] duration metric: took 377.73817ms waiting for pod "kube-scheduler-ingress-addon-legacy-332576" in "kube-system" namespace to be "Ready" ...
	I0108 22:44:38.184876 1179664 pod_ready.go:38] duration metric: took 6.913960631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
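The per-pod readiness loop above can be approximated with 'kubectl wait' against the same labels; this is only a sketch for the kube-dns case, using the label shown at 22:44:31.270898:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
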
	I0108 22:44:38.184916 1179664 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:44:38.185036 1179664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:44:38.212296 1179664 api_server.go:72] duration metric: took 14.560883432s to wait for apiserver process to appear ...
	I0108 22:44:38.212378 1179664 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:44:38.212417 1179664 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0108 22:44:38.221510 1179664 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0108 22:44:38.222645 1179664 api_server.go:141] control plane version: v1.18.20
	I0108 22:44:38.222666 1179664 api_server.go:131] duration metric: took 10.26841ms to wait for apiserver health ...
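The healthz probe is a plain HTTPS GET against the apiserver; the equivalent from the host is sketched below (-k skips certificate verification since the cluster CA is not in the host trust store):

    curl -k https://192.168.49.2:8443/healthz
    # expected response body: ok
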
	I0108 22:44:38.222675 1179664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:44:38.382039 1179664 request.go:629] Waited for 159.296587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:44:38.388481 1179664 system_pods.go:59] 8 kube-system pods found
	I0108 22:44:38.388565 1179664 system_pods.go:61] "coredns-66bff467f8-dtv5t" [abbdd40a-322d-427d-b244-97878e50a5dc] Running
	I0108 22:44:38.388577 1179664 system_pods.go:61] "etcd-ingress-addon-legacy-332576" [5778e50f-56a5-4060-916c-17e99641aa79] Running
	I0108 22:44:38.388583 1179664 system_pods.go:61] "kindnet-gwhc8" [fd417944-02f5-4bfb-bb47-4c0f4f1e0c30] Running
	I0108 22:44:38.388588 1179664 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-332576" [cb861063-0649-4c49-9a68-efa89ed98504] Running
	I0108 22:44:38.388594 1179664 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-332576" [e7d2b02a-3283-4315-8b16-dc51244598e0] Running
	I0108 22:44:38.388599 1179664 system_pods.go:61] "kube-proxy-pzcc8" [a7c20d53-4cbe-4092-b05a-2e3d657ad768] Running
	I0108 22:44:38.388604 1179664 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-332576" [654757a2-75fa-4de5-858a-c7cb7368e055] Running
	I0108 22:44:38.388612 1179664 system_pods.go:61] "storage-provisioner" [4c8c01ab-8f59-4705-b9c7-4643a203e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:44:38.388625 1179664 system_pods.go:74] duration metric: took 165.942614ms to wait for pod list to return data ...
	I0108 22:44:38.388635 1179664 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:44:38.582060 1179664 request.go:629] Waited for 193.306499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 22:44:38.584591 1179664 default_sa.go:45] found service account: "default"
	I0108 22:44:38.584619 1179664 default_sa.go:55] duration metric: took 195.977923ms for default service account to be created ...
	I0108 22:44:38.584628 1179664 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:44:38.782052 1179664 request.go:629] Waited for 197.327255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:44:38.787940 1179664 system_pods.go:86] 8 kube-system pods found
	I0108 22:44:38.787972 1179664 system_pods.go:89] "coredns-66bff467f8-dtv5t" [abbdd40a-322d-427d-b244-97878e50a5dc] Running
	I0108 22:44:38.787980 1179664 system_pods.go:89] "etcd-ingress-addon-legacy-332576" [5778e50f-56a5-4060-916c-17e99641aa79] Running
	I0108 22:44:38.788011 1179664 system_pods.go:89] "kindnet-gwhc8" [fd417944-02f5-4bfb-bb47-4c0f4f1e0c30] Running
	I0108 22:44:38.788025 1179664 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-332576" [cb861063-0649-4c49-9a68-efa89ed98504] Running
	I0108 22:44:38.788030 1179664 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-332576" [e7d2b02a-3283-4315-8b16-dc51244598e0] Running
	I0108 22:44:38.788036 1179664 system_pods.go:89] "kube-proxy-pzcc8" [a7c20d53-4cbe-4092-b05a-2e3d657ad768] Running
	I0108 22:44:38.788045 1179664 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-332576" [654757a2-75fa-4de5-858a-c7cb7368e055] Running
	I0108 22:44:38.788059 1179664 system_pods.go:89] "storage-provisioner" [4c8c01ab-8f59-4705-b9c7-4643a203e9f0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 22:44:38.788099 1179664 system_pods.go:126] duration metric: took 203.463648ms to wait for k8s-apps to be running ...
	I0108 22:44:38.788125 1179664 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:44:38.788203 1179664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:44:38.802206 1179664 system_svc.go:56] duration metric: took 14.075014ms WaitForService to wait for kubelet.
	I0108 22:44:38.802281 1179664 kubeadm.go:581] duration metric: took 15.150875689s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:44:38.802315 1179664 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:44:38.981693 1179664 request.go:629] Waited for 179.296272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0108 22:44:38.984697 1179664 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 22:44:38.984730 1179664 node_conditions.go:123] node cpu capacity is 2
	I0108 22:44:38.984743 1179664 node_conditions.go:105] duration metric: took 182.416103ms to run NodePressure ...
	I0108 22:44:38.984757 1179664 start.go:228] waiting for startup goroutines ...
	I0108 22:44:38.984764 1179664 start.go:233] waiting for cluster config update ...
	I0108 22:44:38.984778 1179664 start.go:242] writing updated cluster config ...
	I0108 22:44:38.985096 1179664 ssh_runner.go:195] Run: rm -f paused
	I0108 22:44:39.053300 1179664 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 22:44:39.055736 1179664 out.go:177] 
	W0108 22:44:39.058074 1179664 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 22:44:39.060102 1179664 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 22:44:39.062271 1179664 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-332576" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.381791002Z" level=info msg="Stopped container f12191e9c64fa64e4e3d18671c9687531f66a624c63085ef0da4084066316e4d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-lpcb5/controller" id=77c15e64-5c16-402b-ba5a-2804847e2848 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.384057062Z" level=info msg="Stopped container f12191e9c64fa64e4e3d18671c9687531f66a624c63085ef0da4084066316e4d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-lpcb5/controller" id=9fbcf7dd-8c0c-4321-b3d0-d9e1be7c4b79 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.384607852Z" level=info msg="Stopping pod sandbox: 0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a" id=816523e6-8ba3-4e21-8470-b5a20c001566 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.388183672Z" level=info msg="Stopping pod sandbox: 0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a" id=db072bb0-a8dd-4107-a5c3-263b66b40aa9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.388388003Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-EDIHJGFLJKIRVV34 - [0:0]\n:KUBE-HP-4KOJEILAR4TSYPJG - [0:0]\n-X KUBE-HP-EDIHJGFLJKIRVV34\n-X KUBE-HP-4KOJEILAR4TSYPJG\nCOMMIT\n"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.393787926Z" level=info msg="Closing host port tcp:80"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.393837107Z" level=info msg="Closing host port tcp:443"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.395147917Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.395173590Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.395326680Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-lpcb5 Namespace:ingress-nginx ID:0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a UID:34d5bb63-4c27-47b3-86f2-f5ac0a752057 NetNS:/var/run/netns/dbb06a9c-0f39-469a-b97e-0ab7b5e138c9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.395469120Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-lpcb5 from CNI network \"kindnet\" (type=ptp)"
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.422591727Z" level=info msg="Stopped pod sandbox: 0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a" id=816523e6-8ba3-4e21-8470-b5a20c001566 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 22:47:44 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:44.422710963Z" level=info msg="Stopped pod sandbox (already stopped): 0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a" id=db072bb0-a8dd-4107-a5c3-263b66b40aa9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.191253868Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=325f48ce-1a6e-452f-ad24-449d51daf564 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.191470146Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=325f48ce-1a6e-452f-ad24-449d51daf564 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.192137784Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=3f243380-726e-4ab2-a9be-ef0b8a29c257 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.192315162Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3f243380-726e-4ab2-a9be-ef0b8a29c257 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.193108346Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-s5vd9/hello-world-app" id=66a1bb8e-7c5d-4359-a847-d038c90d117f name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.193212239Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.268193983Z" level=info msg="Created container b09cd5468b6dbaff721eeb9dcb6869507e148b29c65a1b93fa1a7064c938dda4: default/hello-world-app-5f5d8b66bb-s5vd9/hello-world-app" id=66a1bb8e-7c5d-4359-a847-d038c90d117f name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.269180020Z" level=info msg="Starting container: b09cd5468b6dbaff721eeb9dcb6869507e148b29c65a1b93fa1a7064c938dda4" id=dd1fa82d-3dd0-4fc4-be28-d053cffc21a4 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 08 22:47:48 ingress-addon-legacy-332576 conmon[3726]: conmon b09cd5468b6dbaff721e <ninfo>: container 3739 exited with status 1
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.283574195Z" level=info msg="Started container" PID=3739 containerID=b09cd5468b6dbaff721eeb9dcb6869507e148b29c65a1b93fa1a7064c938dda4 description=default/hello-world-app-5f5d8b66bb-s5vd9/hello-world-app id=dd1fa82d-3dd0-4fc4-be28-d053cffc21a4 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=8db62a429e7bdea5f27889ca70d71b2d6eb54b418a450d8fde5bf3b26e4fedb0
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.840917343Z" level=info msg="Removing container: 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01" id=07e3af8a-7f75-4dbe-9266-b4f3629a5de3 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 08 22:47:48 ingress-addon-legacy-332576 crio[903]: time="2024-01-08 22:47:48.872199658Z" level=info msg="Removed container 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01: default/hello-world-app-5f5d8b66bb-s5vd9/hello-world-app" id=07e3af8a-7f75-4dbe-9266-b4f3629a5de3 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b09cd5468b6db       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   1 second ago        Exited              hello-world-app           2                   8db62a429e7bd       hello-world-app-5f5d8b66bb-s5vd9
	a47afa71fad2f       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   b7b541f137e2a       nginx
	f12191e9c64fa       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   0aecc647caaf0       ingress-nginx-controller-7fcf777cb7-lpcb5
	0227adf26e355       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   1152ad0219820       ingress-nginx-admission-patch-k7s2h
	99dc6f5c5833c       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   e6dd17895f411       ingress-nginx-admission-create-5mnk9
	1191a5e565347       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   05d6f3479b859       storage-provisioner
	432f6a6b7b7ac       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   9831b8e946aae       coredns-66bff467f8-dtv5t
	1497139168e8c       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   99f25d6bcf00d       kindnet-gwhc8
	68b69e1f47f66       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   59291380b9d59       kube-proxy-pzcc8
	77500e9355fbd       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   fb9c30e708b9d       etcd-ingress-addon-legacy-332576
	9b5771f501997       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   699afb16f87ce       kube-controller-manager-ingress-addon-legacy-332576
	a668dc5aba855       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   d7e182141e7f2       kube-apiserver-ingress-addon-legacy-332576
	ea62ec9797f5a       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   e3691c1015586       kube-scheduler-ingress-addon-legacy-332576
	
	
	==> coredns [432f6a6b7b7acec54bf32266753d35886f1708a370153b21017a55047245fc31] <==
	[INFO] 10.244.0.5:35690 - 59966 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061087s
	[INFO] 10.244.0.5:58312 - 46286 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00629631s
	[INFO] 10.244.0.5:35690 - 28012 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002133639s
	[INFO] 10.244.0.5:58312 - 25039 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002243062s
	[INFO] 10.244.0.5:58312 - 21950 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000399998s
	[INFO] 10.244.0.5:35690 - 30954 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001104107s
	[INFO] 10.244.0.5:35690 - 49397 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000084488s
	[INFO] 10.244.0.5:40563 - 7639 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085923s
	[INFO] 10.244.0.5:40563 - 31183 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046556s
	[INFO] 10.244.0.5:42990 - 8292 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032533s
	[INFO] 10.244.0.5:40563 - 44534 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036693s
	[INFO] 10.244.0.5:42990 - 62581 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00002395s
	[INFO] 10.244.0.5:40563 - 42441 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034773s
	[INFO] 10.244.0.5:42990 - 58136 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027348s
	[INFO] 10.244.0.5:42990 - 11350 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000162632s
	[INFO] 10.244.0.5:40563 - 17777 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041362s
	[INFO] 10.244.0.5:42990 - 39606 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045768s
	[INFO] 10.244.0.5:40563 - 25198 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042674s
	[INFO] 10.244.0.5:42990 - 20001 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055458s
	[INFO] 10.244.0.5:40563 - 63129 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001494004s
	[INFO] 10.244.0.5:42990 - 47121 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001121092s
	[INFO] 10.244.0.5:40563 - 60541 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001645708s
	[INFO] 10.244.0.5:42990 - 10800 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00377357s
	[INFO] 10.244.0.5:42990 - 48789 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054999s
	[INFO] 10.244.0.5:40563 - 27223 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038753s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-332576
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-332576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=ingress-addon-legacy-332576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_44_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:44:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-332576
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:47:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:47:41 +0000   Mon, 08 Jan 2024 22:43:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:47:41 +0000   Mon, 08 Jan 2024 22:43:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:47:41 +0000   Mon, 08 Jan 2024 22:43:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:47:41 +0000   Mon, 08 Jan 2024 22:44:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-332576
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccfa904f6d704961a4361030780119b1
	  System UUID:                34a1f1ce-32b3-4c37-a185-48981ebdac31
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-s5vd9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-dtv5t                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m28s
	  kube-system                 etcd-ingress-addon-legacy-332576                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kindnet-gwhc8                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-332576             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-332576    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-pzcc8                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-332576             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m53s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x4 over 3m53s)  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x5 over 3m53s)  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x4 over 3m53s)  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s                  kubelet     Node ingress-addon-legacy-332576 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m19s                  kubelet     Node ingress-addon-legacy-332576 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001218] FS-Cache: O-key=[8] 'ee3f5c0100000000'
	[  +0.000867] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001067] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001231] FS-Cache: N-key=[8] 'ee3f5c0100000000'
	[  +0.004146] FS-Cache: Duplicate cookie detected
	[  +0.000852] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000b0066836
	[  +0.001211] FS-Cache: O-key=[8] 'ee3f5c0100000000'
	[  +0.000822] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001155] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000767cf050
	[  +0.001229] FS-Cache: N-key=[8] 'ee3f5c0100000000'
	[  +3.371742] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000f6accb7c
	[  +0.001152] FS-Cache: O-key=[8] 'ed3f5c0100000000'
	[  +0.000838] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000450c6a04
	[  +0.001169] FS-Cache: N-key=[8] 'ed3f5c0100000000'
	[  +0.456928] FS-Cache: Duplicate cookie detected
	[  +0.000821] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000ee2dae65
	[  +0.001165] FS-Cache: O-key=[8] 'f33f5c0100000000'
	[  +0.000814] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001079] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000024f05116
	[  +0.001224] FS-Cache: N-key=[8] 'f33f5c0100000000'
	
	
	==> etcd [77500e9355fbd8bfa822eccb52ab79b600876fb372a1859f436ebe00ccf322d8] <==
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/08 22:44:00 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 22:44:00.237570 W | auth: simple token is not cryptographically signed
	2024-01-08 22:44:00.255491 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 22:44:00.265883 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-08 22:44:00.271742 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-08 22:44:00.272150 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 22:44:00.272502 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 22:44:00.272670 I | embed: listening for peers on 192.168.49.2:2380
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/08 22:44:00 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/08 22:44:00 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-08 22:44:00.730780 I | etcdserver: published {Name:ingress-addon-legacy-332576 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-08 22:44:00.730858 I | embed: ready to serve client requests
	2024-01-08 22:44:00.731037 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 22:44:00.731114 I | embed: ready to serve client requests
	2024-01-08 22:44:00.732481 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-08 22:44:00.732584 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 22:44:00.743075 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 22:44:00.743222 I | etcdserver/api: enabled capabilities for version 3.4
	
	
	==> kernel <==
	 22:47:50 up  5:30,  0 users,  load average: 0.17, 0.94, 1.67
	Linux ingress-addon-legacy-332576 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [1497139168e8c7c21c2014d8a8e92c2143060fccd2ed40964ae6036009e53a90] <==
	I0108 22:45:45.774175       1 main.go:227] handling current node
	I0108 22:45:55.777414       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:45:55.777443       1 main.go:227] handling current node
	I0108 22:46:05.789378       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:05.789408       1 main.go:227] handling current node
	I0108 22:46:15.793338       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:15.793367       1 main.go:227] handling current node
	I0108 22:46:25.799981       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:25.800010       1 main.go:227] handling current node
	I0108 22:46:35.803125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:35.803154       1 main.go:227] handling current node
	I0108 22:46:45.814890       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:45.814918       1 main.go:227] handling current node
	I0108 22:46:55.819343       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:46:55.819370       1 main.go:227] handling current node
	I0108 22:47:05.827308       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:47:05.827340       1 main.go:227] handling current node
	I0108 22:47:15.833462       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:47:15.833492       1 main.go:227] handling current node
	I0108 22:47:25.837828       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:47:25.837859       1 main.go:227] handling current node
	I0108 22:47:35.844441       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:47:35.844472       1 main.go:227] handling current node
	I0108 22:47:45.848308       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0108 22:47:45.848341       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a668dc5aba85554bd3abdb892c6a0a394dae7e42b3af91f8848f791d86e1cdd7] <==
	I0108 22:44:04.616402       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0108 22:44:04.616412       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I0108 22:44:04.716714       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 22:44:04.722658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 22:44:04.722705       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0108 22:44:04.724192       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 22:44:04.724229       1 cache.go:39] Caches are synced for autoregister controller
	I0108 22:44:05.513449       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 22:44:05.513621       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 22:44:05.527834       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 22:44:05.531134       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 22:44:05.531159       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 22:44:05.954712       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 22:44:05.999380       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 22:44:06.149732       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0108 22:44:06.150841       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 22:44:06.155087       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 22:44:06.945050       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 22:44:07.687529       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 22:44:07.779409       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 22:44:11.070709       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 22:44:22.346094       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 22:44:22.426348       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 22:44:39.922162       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 22:45:04.432560       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [9b5771f501997ece6f8dc87729ac264d7ef3f117b4237b44de252b2b384dddc5] <==
	t{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000d10c30), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f29be8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000925ab0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b2a28)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f29c30)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0108 22:44:22.667456       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a9a5b0ce-52de-4109-b696-23063fa353df", APIVersion:"apps/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-dtv5t
	E0108 22:44:22.741282       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"028f6b68-3213-4b22-bfd5-f6f7f2904882", ResourceVersion:"325", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63840350648, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40018ff6c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40018ff6e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40018ff700), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40018ff720)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40018ff740), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"",
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018ff760), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*
v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018ff780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStore
VolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.
CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018ff7a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*
v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018ff7c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018ff800)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10
0m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1
.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400191b900), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40018dde18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000352fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Tolera
tion{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b3178)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40018dde60)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please a
pply your changes to the latest version and try again
	I0108 22:44:22.823536       1 shared_informer.go:230] Caches are synced for job 
	I0108 22:44:22.925900       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 22:44:22.932340       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0108 22:44:22.945958       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 22:44:22.945978       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 22:44:22.960594       1 shared_informer.go:230] Caches are synced for disruption 
	I0108 22:44:22.960619       1 disruption.go:339] Sending events to api server.
	I0108 22:44:22.992480       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 22:44:23.012830       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 22:44:23.199166       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f51c65ef-8c4c-4b72-8c93-53d9759d5be1", APIVersion:"apps/v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0108 22:44:23.300324       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a9a5b0ce-52de-4109-b696-23063fa353df", APIVersion:"apps/v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-6gz54
	I0108 22:44:32.345642       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0108 22:44:39.918406       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"24a1f0e6-72b0-4019-be4c-9b7ad514bf45", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 22:44:39.976262       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f02f81a2-ebd2-44bf-aace-6c07ea21eb32", APIVersion:"apps/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-lpcb5
	I0108 22:44:39.976311       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5ea78a07-0505-45c1-a96b-0573dca47101", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-5mnk9
	I0108 22:44:39.997464       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bcbf23fb-ad8f-41e8-9cef-342a110ea961", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-k7s2h
	I0108 22:44:42.355141       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5ea78a07-0505-45c1-a96b-0573dca47101", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 22:44:43.352083       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bcbf23fb-ad8f-41e8-9cef-342a110ea961", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 22:47:24.980394       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ef4be732-09fa-46fd-9688-483b7f7db061", APIVersion:"apps/v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 22:47:24.991950       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"2acec822-3187-4976-8d0b-ca8df1e0f7c5", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-s5vd9
	E0108 22:47:46.818110       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-shwpw" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [68b69e1f47f66bed45cbae5b9dc70c8f6e3bff0f37eb847a55d57574f93159a0] <==
	W0108 22:44:25.311673       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 22:44:25.322457       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0108 22:44:25.322503       1 server_others.go:186] Using iptables Proxier.
	I0108 22:44:25.324036       1 server.go:583] Version: v1.18.20
	I0108 22:44:25.326193       1 config.go:315] Starting service config controller
	I0108 22:44:25.326300       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 22:44:25.326464       1 config.go:133] Starting endpoints config controller
	I0108 22:44:25.326510       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 22:44:25.426700       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0108 22:44:25.427618       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [ea62ec9797f5a4ca3dfa6be12a80380f1c7ccdc0004f60f87813e3968bcb7bf1] <==
	I0108 22:44:04.702743       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 22:44:04.702988       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 22:44:04.703027       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 22:44:04.703075       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 22:44:04.707274       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:44:04.708013       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:44:04.708117       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:44:04.708307       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:44:04.711133       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:44:04.711225       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:44:04.711291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:44:04.711356       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:44:04.711414       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 22:44:04.711484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:44:04.711553       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:44:04.711627       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:44:05.645608       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:44:05.663976       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:44:05.720216       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:44:05.770994       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:44:05.777188       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:44:05.940398       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 22:44:08.603611       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0108 22:44:22.688694       1 factory.go:503] pod: kube-system/coredns-66bff467f8-6gz54 is already present in the active queue
	E0108 22:44:22.745256       1 factory.go:503] pod kube-system/coredns-66bff467f8-dtv5t is already present in the backoff queue
	
	
	==> kubelet <==
	Jan 08 22:47:33 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:33.812711    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01
	Jan 08 22:47:33 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:33.812960    1613 pod_workers.go:191] Error syncing pod 514f399d-aef2-455f-b7fc-5693d38e9f0f ("hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"
	Jan 08 22:47:34 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:34.815108    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01
	Jan 08 22:47:34 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:34.815356    1613 pod_workers.go:191] Error syncing pod 514f399d-aef2-455f-b7fc-5693d38e9f0f ("hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"
	Jan 08 22:47:35 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:35.191613    1613 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 22:47:35 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:35.191662    1613 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 22:47:35 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:35.191711    1613 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 08 22:47:35 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:35.191747    1613 pod_workers.go:191] Error syncing pod bfc44a42-3d37-41ac-bd02-01ff38ef535f ("kube-ingress-dns-minikube_kube-system(bfc44a42-3d37-41ac-bd02-01ff38ef535f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 08 22:47:41 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:41.079524    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-flq5p" (UniqueName: "kubernetes.io/secret/bfc44a42-3d37-41ac-bd02-01ff38ef535f-minikube-ingress-dns-token-flq5p") pod "bfc44a42-3d37-41ac-bd02-01ff38ef535f" (UID: "bfc44a42-3d37-41ac-bd02-01ff38ef535f")
	Jan 08 22:47:41 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:41.083801    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc44a42-3d37-41ac-bd02-01ff38ef535f-minikube-ingress-dns-token-flq5p" (OuterVolumeSpecName: "minikube-ingress-dns-token-flq5p") pod "bfc44a42-3d37-41ac-bd02-01ff38ef535f" (UID: "bfc44a42-3d37-41ac-bd02-01ff38ef535f"). InnerVolumeSpecName "minikube-ingress-dns-token-flq5p". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:47:41 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:41.180020    1613 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-flq5p" (UniqueName: "kubernetes.io/secret/bfc44a42-3d37-41ac-bd02-01ff38ef535f-minikube-ingress-dns-token-flq5p") on node "ingress-addon-legacy-332576" DevicePath ""
	Jan 08 22:47:41 ingress-addon-legacy-332576 kubelet[1613]: W0108 22:47:41.825398    1613 pod_container_deletor.go:77] Container "8920016cc238c73e5affd8b0e9f6e69a5519e2b38c8c06718d07d163d056adfe" not found in pod's containers
	Jan 08 22:47:42 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:42.202073    1613 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lpcb5.17a880c99290a054", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lpcb5", UID:"34d5bb63-4c27-47b3-86f2-f5ac0a752057", APIVersion:"v1", ResourceVersion:"500", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-332576"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3cc38ba69454, ext:214599857206, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3cc38ba69454, ext:214599857206, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lpcb5.17a880c99290a054" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 22:47:42 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:42.214387    1613 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-lpcb5.17a880c99290a054", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-lpcb5", UID:"34d5bb63-4c27-47b3-86f2-f5ac0a752057", APIVersion:"v1", ResourceVersion:"500", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-332576"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f3cc38ba69454, ext:214599857206, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f3cc38b696173, ext:214595846484, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-lpcb5.17a880c99290a054" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 22:47:44 ingress-addon-legacy-332576 kubelet[1613]: W0108 22:47:44.830870    1613 pod_container_deletor.go:77] Container "0aecc647caaf08415e927438d368f4cd000e11a75db0a8970f57b43b39c0b89a" not found in pod's containers
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.010167    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-webhook-cert") pod "34d5bb63-4c27-47b3-86f2-f5ac0a752057" (UID: "34d5bb63-4c27-47b3-86f2-f5ac0a752057")
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.010250    1613 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bd52r" (UniqueName: "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-ingress-nginx-token-bd52r") pod "34d5bb63-4c27-47b3-86f2-f5ac0a752057" (UID: "34d5bb63-4c27-47b3-86f2-f5ac0a752057")
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.042879    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-ingress-nginx-token-bd52r" (OuterVolumeSpecName: "ingress-nginx-token-bd52r") pod "34d5bb63-4c27-47b3-86f2-f5ac0a752057" (UID: "34d5bb63-4c27-47b3-86f2-f5ac0a752057"). InnerVolumeSpecName "ingress-nginx-token-bd52r". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.044139    1613 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "34d5bb63-4c27-47b3-86f2-f5ac0a752057" (UID: "34d5bb63-4c27-47b3-86f2-f5ac0a752057"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.110696    1613 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bd52r" (UniqueName: "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-ingress-nginx-token-bd52r") on node "ingress-addon-legacy-332576" DevicePath ""
	Jan 08 22:47:45 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:45.110777    1613 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/34d5bb63-4c27-47b3-86f2-f5ac0a752057-webhook-cert") on node "ingress-addon-legacy-332576" DevicePath ""
	Jan 08 22:47:48 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:48.190729    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01
	Jan 08 22:47:48 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:48.838634    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 345d22ecd9cf61a6c5bf32f993b0c79de2f697830bf6eb60e9a90364b6e7fb01
	Jan 08 22:47:48 ingress-addon-legacy-332576 kubelet[1613]: I0108 22:47:48.838881    1613 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b09cd5468b6dbaff721eeb9dcb6869507e148b29c65a1b93fa1a7064c938dda4
	Jan 08 22:47:48 ingress-addon-legacy-332576 kubelet[1613]: E0108 22:47:48.839146    1613 pod_workers.go:191] Error syncing pod 514f399d-aef2-455f-b7fc-5693d38e9f0f ("hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-s5vd9_default(514f399d-aef2-455f-b7fc-5693d38e9f0f)"
	
	
	==> storage-provisioner [1191a5e56534756730242b87ea71713f6d6d360b4ad2cb55739a993ce8660b14] <==
	I0108 22:44:38.496026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:44:38.512730       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:44:38.512862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:44:38.521081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:44:38.521554       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dcfa3f60-c955-4f2b-9956-7596297f2607", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-332576_f41e7cde-29fe-4a29-81dc-731b3b6c4320 became leader
	I0108 22:44:38.521896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-332576_f41e7cde-29fe-4a29-81dc-731b3b6c4320!
	I0108 22:44:38.622294       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-332576_f41e7cde-29fe-4a29-81dc-731b3b6c4320!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-332576 -n ingress-addon-legacy-332576
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-332576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.43s)
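Note on the failure above: the kubelet log shows that the kube-ingress-dns-minikube pod never starts because its image is referenced by the short name cryptexlabs/minikube-ingress-dns:0.3.0 (pinned by digest), and CRI-O refuses to resolve short names when no unqualified-search registries are defined in /etc/containers/registries.conf, so every start attempt ends in ImageInspectError. A minimal sketch of one way to check and work around this on the node is below; it assumes docker.io is an acceptable search registry and that the node's CRI-O honours registries.conf.d drop-ins (illustrative only, not the suite's fix):

	# open a shell on the node (illustrative); the commands after it run inside that shell
	out/minikube-linux-arm64 -p ingress-addon-legacy-332576 ssh
	# confirm that no unqualified-search registries are configured, which is what the kubelet error reports
	grep -Rs unqualified-search /etc/containers/registries.conf /etc/containers/registries.conf.d/
	# add docker.io as a search registry via a drop-in, then restart CRI-O so the short name can resolve
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee /etc/containers/registries.conf.d/99-unqualified-search.conf
	sudo systemctl restart crio

Referencing the image by its fully qualified name (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0) would avoid short-name resolution entirely.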

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (246.196822ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-5qwgb): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- sh -c "ping -c 1 192.168.58.1": exit status 1 (233.243663ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-kcr7b): exit status 1
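For context on the two ping failures above: busybox's ping first tries a raw ICMP socket and then falls back to an unprivileged ICMP datagram socket. The pod is not privileged, CRI-O's default capability set typically omits NET_RAW (unlike Docker), and the kernel default for net.ipv4.ping_group_range is "1 0", which admits no group, so both attempts are refused and ping reports "permission denied (are you root?)"; the failure is a socket-permission issue rather than a connectivity one, since 192.168.58.1 is just the host side of the profile's docker network. A small sketch of commands that can confirm this, using the profile and pod names from the test (illustrative, not part of the test itself):

	# 192.168.58.1 is the host-side gateway of the "multinode-265402" docker network, so the host can reach it (illustrative)
	ping -c 1 192.168.58.1
	# inside the pod, check which groups may open an unprivileged ICMP socket ("1 0" means none) and what GID the shell has
	out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- sh -c "cat /proc/sys/net/ipv4/ping_group_range; id -g"

Granting the pod CAP_NET_RAW (securityContext.capabilities.add: ["NET_RAW"]) or setting the pod-level sysctl net.ipv4.ping_group_range would let an unprivileged ping succeed.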
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-265402
helpers_test.go:235: (dbg) docker inspect multinode-265402:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7",
	        "Created": "2024-01-08T22:54:32.507622446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1216384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T22:54:32.82967524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3167e60a71dbae425a4b9caa3fc8f52cf3c3b5035be6746ce0af2b692a3018d8",
	        "ResolvConfPath": "/var/lib/docker/containers/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/hosts",
	        "LogPath": "/var/lib/docker/containers/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7-json.log",
	        "Name": "/multinode-265402",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-265402:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-265402",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8d184cfa938ad64f3cf5bf55694e48de8b0c58a884ab0659515e90437fd0e3ed-init/diff:/var/lib/docker/overlay2/38e0010c12bf0b8a699570be0a9e49c2514b24d0012b6438a157027e46de7e51/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8d184cfa938ad64f3cf5bf55694e48de8b0c58a884ab0659515e90437fd0e3ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8d184cfa938ad64f3cf5bf55694e48de8b0c58a884ab0659515e90437fd0e3ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8d184cfa938ad64f3cf5bf55694e48de8b0c58a884ab0659515e90437fd0e3ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-265402",
	                "Source": "/var/lib/docker/volumes/multinode-265402/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-265402",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-265402",
	                "name.minikube.sigs.k8s.io": "multinode-265402",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bf7fdae97f634761104edbac51c4370e523325d6118145b4ce978254480b19f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34104"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34105"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bf7fdae97f6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-265402": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "04ff89274b48",
	                        "multinode-265402"
	                    ],
	                    "NetworkID": "19e438b586a42da7f7ec5c97c9b94134ebcf374d73f9c8faadd1af7b911f59bc",
	                    "EndpointID": "735c1e4dc343963f884337897d44c5903eb30e3a946e31b9f1734a79b3afc379",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
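The Networks block in the inspect output above confirms that 192.168.58.1, the address the failing ping targets, is simply the gateway of the profile's docker network, so the test is pinging a valid host-side address. The same value can be read straight from the network object (illustrative command, not part of the post-mortem):

	docker network inspect multinode-265402 --format '{{(index .IPAM.Config 0).Gateway}}'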
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-265402 -n multinode-265402
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-265402 logs -n 25: (1.751206986s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-439643                           | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-439643 ssh -- ls                    | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-437452                           | mount-start-1-437452 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-439643 ssh -- ls                    | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-439643                           | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| start   | -p mount-start-2-439643                           | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| ssh     | mount-start-2-439643 ssh -- ls                    | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-439643                           | mount-start-2-439643 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| delete  | -p mount-start-1-437452                           | mount-start-1-437452 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| start   | -p multinode-265402                               | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:56 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- apply -f                   | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- rollout                    | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- get pods -o                | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- get pods -o                | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-5qwgb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-kcr7b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-5qwgb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-kcr7b --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-5qwgb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-kcr7b -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- get pods -o                | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-5qwgb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC |                     |
	|         | busybox-5bc68d56bd-5qwgb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC | 08 Jan 24 22:56 UTC |
	|         | busybox-5bc68d56bd-kcr7b                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-265402 -- exec                       | multinode-265402     | jenkins | v1.32.0 | 08 Jan 24 22:56 UTC |                     |
	|         | busybox-5bc68d56bd-kcr7b -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:54:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:54:27.031439 1215935 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:54:27.031679 1215935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:27.031709 1215935 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:27.031731 1215935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:27.032021 1215935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:54:27.032544 1215935 out.go:303] Setting JSON to false
	I0108 22:54:27.033554 1215935 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20207,"bootTime":1704734260,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:54:27.033664 1215935 start.go:138] virtualization:  
	I0108 22:54:27.037671 1215935 out.go:177] * [multinode-265402] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:54:27.039633 1215935 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:54:27.041418 1215935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:54:27.039835 1215935 notify.go:220] Checking for updates...
	I0108 22:54:27.045204 1215935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:54:27.047048 1215935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:54:27.049378 1215935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 22:54:27.051294 1215935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:54:27.053530 1215935 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:54:27.078837 1215935 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:54:27.078968 1215935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:54:27.147773 1215935 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 22:54:27.137582005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:54:27.147888 1215935 docker.go:295] overlay module found
	I0108 22:54:27.150156 1215935 out.go:177] * Using the docker driver based on user configuration
	I0108 22:54:27.152199 1215935 start.go:298] selected driver: docker
	I0108 22:54:27.152218 1215935 start.go:902] validating driver "docker" against <nil>
	I0108 22:54:27.152233 1215935 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:54:27.152879 1215935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:54:27.226802 1215935 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-08 22:54:27.217204818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:54:27.226979 1215935 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:54:27.227290 1215935 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:54:27.229351 1215935 out.go:177] * Using Docker driver with root privileges
	I0108 22:54:27.231280 1215935 cni.go:84] Creating CNI manager for ""
	I0108 22:54:27.231300 1215935 cni.go:136] 0 nodes found, recommending kindnet
	I0108 22:54:27.231317 1215935 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:54:27.231337 1215935 start_flags.go:321] config:
	{Name:multinode-265402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:54:27.233466 1215935 out.go:177] * Starting control plane node multinode-265402 in cluster multinode-265402
	I0108 22:54:27.235342 1215935 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:54:27.237282 1215935 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:54:27.239117 1215935 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:54:27.239154 1215935 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:54:27.239171 1215935 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 22:54:27.239178 1215935 cache.go:56] Caching tarball of preloaded images
	I0108 22:54:27.239268 1215935 preload.go:174] Found /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 22:54:27.239278 1215935 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:54:27.239667 1215935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json ...
	I0108 22:54:27.239699 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json: {Name:mk3835062e479a60c64ef87a201e3d6de7825451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:27.266914 1215935 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 22:54:27.266940 1215935 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 22:54:27.266954 1215935 cache.go:194] Successfully downloaded all kic artifacts
	I0108 22:54:27.267005 1215935 start.go:365] acquiring machines lock for multinode-265402: {Name:mk88531f87b740d6d3923e1e55a5adcab3a8c934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:27.267115 1215935 start.go:369] acquired machines lock for "multinode-265402" in 88.451µs
	I0108 22:54:27.267146 1215935 start.go:93] Provisioning new machine with config: &{Name:multinode-265402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:54:27.267230 1215935 start.go:125] createHost starting for "" (driver="docker")
	I0108 22:54:27.269624 1215935 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 22:54:27.269879 1215935 start.go:159] libmachine.API.Create for "multinode-265402" (driver="docker")
	I0108 22:54:27.269913 1215935 client.go:168] LocalClient.Create starting
	I0108 22:54:27.269988 1215935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem
	I0108 22:54:27.270028 1215935 main.go:141] libmachine: Decoding PEM data...
	I0108 22:54:27.270048 1215935 main.go:141] libmachine: Parsing certificate...
	I0108 22:54:27.270098 1215935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem
	I0108 22:54:27.270120 1215935 main.go:141] libmachine: Decoding PEM data...
	I0108 22:54:27.270140 1215935 main.go:141] libmachine: Parsing certificate...
	I0108 22:54:27.270510 1215935 cli_runner.go:164] Run: docker network inspect multinode-265402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 22:54:27.287966 1215935 cli_runner.go:211] docker network inspect multinode-265402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 22:54:27.288065 1215935 network_create.go:281] running [docker network inspect multinode-265402] to gather additional debugging logs...
	I0108 22:54:27.288090 1215935 cli_runner.go:164] Run: docker network inspect multinode-265402
	W0108 22:54:27.305675 1215935 cli_runner.go:211] docker network inspect multinode-265402 returned with exit code 1
	I0108 22:54:27.305705 1215935 network_create.go:284] error running [docker network inspect multinode-265402]: docker network inspect multinode-265402: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-265402 not found
	I0108 22:54:27.305726 1215935 network_create.go:286] output of [docker network inspect multinode-265402]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-265402 not found
	
	** /stderr **
	I0108 22:54:27.305821 1215935 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:54:27.323686 1215935 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28dcec50f1fd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ff:2c:12:22} reservation:<nil>}
	I0108 22:54:27.324062 1215935 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400217e160}
	I0108 22:54:27.324084 1215935 network_create.go:124] attempt to create docker network multinode-265402 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0108 22:54:27.324148 1215935 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-265402 multinode-265402
	I0108 22:54:27.396495 1215935 network_create.go:108] docker network multinode-265402 192.168.58.0/24 created
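For reference, the subnet and gateway selected above can be confirmed directly against the Docker daemon. A minimal check (illustrative only, not executed by the test; it assumes the 192.168.58.0/24 allocation shown in the log) would be:

	# Show the subnet/gateway of the network minikube just created
	docker network inspect multinode-265402 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.58.0/24 192.168.58.1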
	I0108 22:54:27.396527 1215935 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-265402" container
	I0108 22:54:27.396612 1215935 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 22:54:27.417836 1215935 cli_runner.go:164] Run: docker volume create multinode-265402 --label name.minikube.sigs.k8s.io=multinode-265402 --label created_by.minikube.sigs.k8s.io=true
	I0108 22:54:27.436457 1215935 oci.go:103] Successfully created a docker volume multinode-265402
	I0108 22:54:27.436549 1215935 cli_runner.go:164] Run: docker run --rm --name multinode-265402-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265402 --entrypoint /usr/bin/test -v multinode-265402:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 22:54:28.034362 1215935 oci.go:107] Successfully prepared a docker volume multinode-265402
	I0108 22:54:28.034421 1215935 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:54:28.034441 1215935 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 22:54:28.034524 1215935 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-265402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 22:54:32.425088 1215935 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-265402:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (4.390521832s)
	I0108 22:54:32.425122 1215935 kic.go:203] duration metric: took 4.390678 seconds to extract preloaded images to volume
	W0108 22:54:32.425293 1215935 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 22:54:32.425409 1215935 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 22:54:32.489409 1215935 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-265402 --name multinode-265402 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265402 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-265402 --network multinode-265402 --ip 192.168.58.2 --volume multinode-265402:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:54:32.839169 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Running}}
	I0108 22:54:32.871080 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:54:32.903456 1215935 cli_runner.go:164] Run: docker exec multinode-265402 stat /var/lib/dpkg/alternatives/iptables
	I0108 22:54:32.989468 1215935 oci.go:144] the created container "multinode-265402" has a running status.
	I0108 22:54:32.989496 1215935 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa...
	I0108 22:54:33.236487 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 22:54:33.236580 1215935 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 22:54:33.263857 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:54:33.293852 1215935 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 22:54:33.293872 1215935 kic_runner.go:114] Args: [docker exec --privileged multinode-265402 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 22:54:33.376149 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:54:33.404271 1215935 machine.go:88] provisioning docker machine ...
	I0108 22:54:33.404320 1215935 ubuntu.go:169] provisioning hostname "multinode-265402"
	I0108 22:54:33.404389 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:33.429791 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:54:33.430218 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34108 <nil> <nil>}
	I0108 22:54:33.430230 1215935 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-265402 && echo "multinode-265402" | sudo tee /etc/hostname
	I0108 22:54:33.430921 1215935 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0108 22:54:36.577009 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-265402
	
	I0108 22:54:36.577091 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:36.596292 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:54:36.596716 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34108 <nil> <nil>}
	I0108 22:54:36.596734 1215935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-265402' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-265402/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-265402' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:54:36.734040 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:54:36.734070 1215935 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 22:54:36.734088 1215935 ubuntu.go:177] setting up certificates
	I0108 22:54:36.734096 1215935 provision.go:83] configureAuth start
	I0108 22:54:36.734217 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402
	I0108 22:54:36.755343 1215935 provision.go:138] copyHostCerts
	I0108 22:54:36.755393 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:54:36.755426 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 22:54:36.755437 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:54:36.755512 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 22:54:36.755588 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:54:36.755630 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 22:54:36.755638 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:54:36.755664 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 22:54:36.755709 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:54:36.755729 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 22:54:36.755738 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:54:36.755763 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 22:54:36.755812 1215935 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.multinode-265402 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-265402]
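The server certificate generated here is issued for the SAN list shown above. A quick way to double-check those SANs on the host (an illustrative sketch, not part of the test run; the path is the server.pem location from the log line above) would be:

	# Print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'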
	I0108 22:54:37.057804 1215935 provision.go:172] copyRemoteCerts
	I0108 22:54:37.057896 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:54:37.057936 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.076571 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:54:37.175742 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 22:54:37.175809 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:54:37.205078 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 22:54:37.205142 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 22:54:37.233628 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 22:54:37.233704 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:54:37.262474 1215935 provision.go:86] duration metric: configureAuth took 528.363889ms
	I0108 22:54:37.262541 1215935 ubuntu.go:193] setting minikube options for container-runtime
	I0108 22:54:37.262749 1215935 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:54:37.262863 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.280704 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:54:37.281276 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34108 <nil> <nil>}
	I0108 22:54:37.281301 1215935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:54:37.527202 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:54:37.527222 1215935 machine.go:91] provisioned docker machine in 4.122928189s
	I0108 22:54:37.527232 1215935 client.go:171] LocalClient.Create took 10.257312877s
	I0108 22:54:37.527244 1215935 start.go:167] duration metric: libmachine.API.Create for "multinode-265402" took 10.257365267s
	I0108 22:54:37.527251 1215935 start.go:300] post-start starting for "multinode-265402" (driver="docker")
	I0108 22:54:37.527261 1215935 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:54:37.527329 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:54:37.527367 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.545683 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:54:37.644097 1215935 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:54:37.648289 1215935 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 22:54:37.648313 1215935 command_runner.go:130] > NAME="Ubuntu"
	I0108 22:54:37.648320 1215935 command_runner.go:130] > VERSION_ID="22.04"
	I0108 22:54:37.648327 1215935 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 22:54:37.648333 1215935 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 22:54:37.648338 1215935 command_runner.go:130] > ID=ubuntu
	I0108 22:54:37.648342 1215935 command_runner.go:130] > ID_LIKE=debian
	I0108 22:54:37.648348 1215935 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 22:54:37.648358 1215935 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 22:54:37.648365 1215935 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 22:54:37.648373 1215935 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 22:54:37.648378 1215935 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 22:54:37.648432 1215935 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 22:54:37.648456 1215935 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 22:54:37.648467 1215935 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 22:54:37.648475 1215935 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 22:54:37.648485 1215935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 22:54:37.648542 1215935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 22:54:37.648629 1215935 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 22:54:37.648635 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /etc/ssl/certs/11522512.pem
	I0108 22:54:37.648734 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:54:37.659377 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:54:37.689102 1215935 start.go:303] post-start completed in 161.834473ms
	I0108 22:54:37.689472 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402
	I0108 22:54:37.707176 1215935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json ...
	I0108 22:54:37.707461 1215935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:54:37.707518 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.726749 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:54:37.819046 1215935 command_runner.go:130] > 18%!
	(MISSING)I0108 22:54:37.819429 1215935 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 22:54:37.825270 1215935 command_runner.go:130] > 160G
	I0108 22:54:37.825311 1215935 start.go:128] duration metric: createHost completed in 10.558066878s
	I0108 22:54:37.825320 1215935 start.go:83] releasing machines lock for "multinode-265402", held for 10.55819134s
	I0108 22:54:37.825398 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402
	I0108 22:54:37.843337 1215935 ssh_runner.go:195] Run: cat /version.json
	I0108 22:54:37.843397 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.843345 1215935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:54:37.843518 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:54:37.863258 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:54:37.872371 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:54:37.957481 1215935 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703790982-17866", "minikube_version": "v1.32.0", "commit": "1553e31a427d433b292e8b2292123d8c426f06f5"}
	I0108 22:54:37.957700 1215935 ssh_runner.go:195] Run: systemctl --version
	I0108 22:54:38.117257 1215935 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 22:54:38.120467 1215935 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0108 22:54:38.120502 1215935 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0108 22:54:38.120575 1215935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:54:38.267855 1215935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 22:54:38.273050 1215935 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 22:54:38.273075 1215935 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 22:54:38.273084 1215935 command_runner.go:130] > Device: 3ah/58d	Inode: 1044667     Links: 1
	I0108 22:54:38.273092 1215935 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:54:38.273099 1215935 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 22:54:38.273109 1215935 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 22:54:38.273116 1215935 command_runner.go:130] > Change: 2024-01-08 22:30:53.898544185 +0000
	I0108 22:54:38.273124 1215935 command_runner.go:130] >  Birth: 2024-01-08 22:30:53.898544185 +0000
	I0108 22:54:38.273568 1215935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:54:38.298558 1215935 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 22:54:38.298679 1215935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:54:38.339672 1215935 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 22:54:38.339702 1215935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0108 22:54:38.339710 1215935 start.go:475] detecting cgroup driver to use...
	I0108 22:54:38.339768 1215935 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 22:54:38.339845 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:54:38.358471 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:54:38.371910 1215935 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:54:38.372016 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:54:38.388210 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:54:38.405106 1215935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:54:38.497295 1215935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:54:38.609741 1215935 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 22:54:38.609892 1215935 docker.go:219] disabling docker service ...
	I0108 22:54:38.609966 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:54:38.632567 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:54:38.647140 1215935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:54:38.751861 1215935 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 22:54:38.752005 1215935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:54:38.854284 1215935 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 22:54:38.854363 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:54:38.868381 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:54:38.887450 1215935 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 22:54:38.888805 1215935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:54:38.888918 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:54:38.901870 1215935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:54:38.901995 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:54:38.914899 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:54:38.927630 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:54:38.939704 1215935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:54:38.951331 1215935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:54:38.960417 1215935 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 22:54:38.961456 1215935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:54:38.971495 1215935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:54:39.065807 1215935 ssh_runner.go:195] Run: sudo systemctl restart crio
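Taken together, the sed edits above leave the CRI-O drop-in with the pause image, cgroup manager and conmon cgroup settings that the run configures before restarting crio. Reconstructed from those commands (assumed, not captured from the node), the relevant lines should read roughly:

	# Show the drop-in the preceding sed commands rewrote
	cat /etc/crio/crio.conf.d/02-crio.conf
	# ... expected to contain, among other settings:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"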
	I0108 22:54:39.201281 1215935 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:54:39.201427 1215935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:54:39.206069 1215935 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 22:54:39.206139 1215935 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 22:54:39.206172 1215935 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0108 22:54:39.206208 1215935 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:54:39.206231 1215935 command_runner.go:130] > Access: 2024-01-08 22:54:39.184903568 +0000
	I0108 22:54:39.206252 1215935 command_runner.go:130] > Modify: 2024-01-08 22:54:39.184903568 +0000
	I0108 22:54:39.206287 1215935 command_runner.go:130] > Change: 2024-01-08 22:54:39.184903568 +0000
	I0108 22:54:39.206307 1215935 command_runner.go:130] >  Birth: -
	I0108 22:54:39.206581 1215935 start.go:543] Will wait 60s for crictl version
	I0108 22:54:39.206668 1215935 ssh_runner.go:195] Run: which crictl
	I0108 22:54:39.210785 1215935 command_runner.go:130] > /usr/bin/crictl
	I0108 22:54:39.211278 1215935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:54:39.250862 1215935 command_runner.go:130] > Version:  0.1.0
	I0108 22:54:39.250939 1215935 command_runner.go:130] > RuntimeName:  cri-o
	I0108 22:54:39.250977 1215935 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 22:54:39.251005 1215935 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 22:54:39.253677 1215935 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 22:54:39.253805 1215935 ssh_runner.go:195] Run: crio --version
	I0108 22:54:39.299264 1215935 command_runner.go:130] > crio version 1.24.6
	I0108 22:54:39.299285 1215935 command_runner.go:130] > Version:          1.24.6
	I0108 22:54:39.299297 1215935 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 22:54:39.299303 1215935 command_runner.go:130] > GitTreeState:     clean
	I0108 22:54:39.299350 1215935 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 22:54:39.299368 1215935 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 22:54:39.299379 1215935 command_runner.go:130] > Compiler:         gc
	I0108 22:54:39.299385 1215935 command_runner.go:130] > Platform:         linux/arm64
	I0108 22:54:39.299405 1215935 command_runner.go:130] > Linkmode:         dynamic
	I0108 22:54:39.299432 1215935 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 22:54:39.299445 1215935 command_runner.go:130] > SeccompEnabled:   true
	I0108 22:54:39.299452 1215935 command_runner.go:130] > AppArmorEnabled:  false
	I0108 22:54:39.301503 1215935 ssh_runner.go:195] Run: crio --version
	I0108 22:54:39.343749 1215935 command_runner.go:130] > crio version 1.24.6
	I0108 22:54:39.343815 1215935 command_runner.go:130] > Version:          1.24.6
	I0108 22:54:39.343845 1215935 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 22:54:39.343878 1215935 command_runner.go:130] > GitTreeState:     clean
	I0108 22:54:39.343899 1215935 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 22:54:39.343922 1215935 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 22:54:39.343946 1215935 command_runner.go:130] > Compiler:         gc
	I0108 22:54:39.343972 1215935 command_runner.go:130] > Platform:         linux/arm64
	I0108 22:54:39.343994 1215935 command_runner.go:130] > Linkmode:         dynamic
	I0108 22:54:39.344029 1215935 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 22:54:39.344057 1215935 command_runner.go:130] > SeccompEnabled:   true
	I0108 22:54:39.344078 1215935 command_runner.go:130] > AppArmorEnabled:  false
	I0108 22:54:39.350851 1215935 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 22:54:39.353483 1215935 cli_runner.go:164] Run: docker network inspect multinode-265402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:54:39.373903 1215935 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 22:54:39.378759 1215935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:54:39.392481 1215935 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:54:39.392556 1215935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:54:39.461423 1215935 command_runner.go:130] > {
	I0108 22:54:39.461444 1215935 command_runner.go:130] >   "images": [
	I0108 22:54:39.461450 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461461 1215935 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0108 22:54:39.461476 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461484 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 22:54:39.461488 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461494 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.461505 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 22:54:39.461522 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0108 22:54:39.461527 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461533 1215935 command_runner.go:130] >       "size": "60867618",
	I0108 22:54:39.461538 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.461543 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.461558 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.461563 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.461568 1215935 command_runner.go:130] >     },
	I0108 22:54:39.461572 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461580 1215935 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0108 22:54:39.461586 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461596 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 22:54:39.461601 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461608 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.461618 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0108 22:54:39.461628 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0108 22:54:39.461632 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461641 1215935 command_runner.go:130] >       "size": "29037500",
	I0108 22:54:39.461646 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.461651 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.461656 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.461661 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.461665 1215935 command_runner.go:130] >     },
	I0108 22:54:39.461669 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461677 1215935 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0108 22:54:39.461682 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461688 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 22:54:39.461693 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461698 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.461707 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0108 22:54:39.461716 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0108 22:54:39.461722 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461728 1215935 command_runner.go:130] >       "size": "51393451",
	I0108 22:54:39.461732 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.461738 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.461742 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.461749 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.461753 1215935 command_runner.go:130] >     },
	I0108 22:54:39.461757 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461765 1215935 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0108 22:54:39.461770 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461775 1215935 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 22:54:39.461780 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461785 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.461795 1215935 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0108 22:54:39.461804 1215935 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0108 22:54:39.461814 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461819 1215935 command_runner.go:130] >       "size": "182203183",
	I0108 22:54:39.461824 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.461830 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.461834 1215935 command_runner.go:130] >       },
	I0108 22:54:39.461839 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.461844 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.461849 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.461853 1215935 command_runner.go:130] >     },
	I0108 22:54:39.461857 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461868 1215935 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0108 22:54:39.461874 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461880 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 22:54:39.461888 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461893 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.461906 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0108 22:54:39.461915 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0108 22:54:39.461923 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.461929 1215935 command_runner.go:130] >       "size": "121119694",
	I0108 22:54:39.461933 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.461946 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.461953 1215935 command_runner.go:130] >       },
	I0108 22:54:39.461958 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.461963 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.461971 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.461975 1215935 command_runner.go:130] >     },
	I0108 22:54:39.461980 1215935 command_runner.go:130] >     {
	I0108 22:54:39.461987 1215935 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0108 22:54:39.461992 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.461999 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 22:54:39.462010 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462015 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.462025 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 22:54:39.462035 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0108 22:54:39.462039 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462045 1215935 command_runner.go:130] >       "size": "117252916",
	I0108 22:54:39.462050 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.462055 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.462059 1215935 command_runner.go:130] >       },
	I0108 22:54:39.462066 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.462071 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.462076 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.462080 1215935 command_runner.go:130] >     },
	I0108 22:54:39.462084 1215935 command_runner.go:130] >     {
	I0108 22:54:39.462091 1215935 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0108 22:54:39.462096 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.462102 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 22:54:39.462106 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462111 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.462120 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0108 22:54:39.462129 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 22:54:39.462133 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462138 1215935 command_runner.go:130] >       "size": "69992343",
	I0108 22:54:39.462143 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.462148 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.462153 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.462158 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.462164 1215935 command_runner.go:130] >     },
	I0108 22:54:39.462169 1215935 command_runner.go:130] >     {
	I0108 22:54:39.462176 1215935 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0108 22:54:39.462182 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.462188 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 22:54:39.462192 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462197 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.462227 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 22:54:39.462236 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0108 22:54:39.462241 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462246 1215935 command_runner.go:130] >       "size": "59253556",
	I0108 22:54:39.462255 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.462260 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.462264 1215935 command_runner.go:130] >       },
	I0108 22:54:39.462269 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.462274 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.462279 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.462283 1215935 command_runner.go:130] >     },
	I0108 22:54:39.462290 1215935 command_runner.go:130] >     {
	I0108 22:54:39.462298 1215935 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0108 22:54:39.462307 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.462316 1215935 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 22:54:39.462320 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462326 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.462337 1215935 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0108 22:54:39.462350 1215935 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0108 22:54:39.462355 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.462360 1215935 command_runner.go:130] >       "size": "520014",
	I0108 22:54:39.462365 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.462370 1215935 command_runner.go:130] >         "value": "65535"
	I0108 22:54:39.462374 1215935 command_runner.go:130] >       },
	I0108 22:54:39.462379 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.462384 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.462389 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.462393 1215935 command_runner.go:130] >     }
	I0108 22:54:39.462400 1215935 command_runner.go:130] >   ]
	I0108 22:54:39.462407 1215935 command_runner.go:130] > }
	I0108 22:54:39.464437 1215935 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:54:39.464523 1215935 crio.go:415] Images already preloaded, skipping extraction
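The preload check reads the JSON above to decide that extraction can be skipped. An equivalent manual query of the runtime (a sketch; it assumes jq is available on the node, which the test does not require) would be:

	# List only the repo tags cri-o reports, mirroring what the preload check parses
	sudo crictl images --output json | jq -r '.images[].repoTags[]'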
	I0108 22:54:39.464621 1215935 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:54:39.506480 1215935 command_runner.go:130] > {
	I0108 22:54:39.506506 1215935 command_runner.go:130] >   "images": [
	I0108 22:54:39.506512 1215935 command_runner.go:130] >     {
	I0108 22:54:39.506522 1215935 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0108 22:54:39.506529 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.506536 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 22:54:39.506545 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506554 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.506569 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 22:54:39.506581 1215935 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0108 22:54:39.506589 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506595 1215935 command_runner.go:130] >       "size": "60867618",
	I0108 22:54:39.506602 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.506608 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.506617 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.506622 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.506630 1215935 command_runner.go:130] >     },
	I0108 22:54:39.506634 1215935 command_runner.go:130] >     {
	I0108 22:54:39.506642 1215935 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0108 22:54:39.506651 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.506658 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 22:54:39.506663 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506668 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.506678 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0108 22:54:39.506687 1215935 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0108 22:54:39.506692 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506699 1215935 command_runner.go:130] >       "size": "29037500",
	I0108 22:54:39.506704 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.506709 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.506714 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.506725 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.506732 1215935 command_runner.go:130] >     },
	I0108 22:54:39.506737 1215935 command_runner.go:130] >     {
	I0108 22:54:39.506747 1215935 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0108 22:54:39.506752 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.506758 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 22:54:39.506765 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506770 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.506779 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0108 22:54:39.506796 1215935 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0108 22:54:39.506803 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506808 1215935 command_runner.go:130] >       "size": "51393451",
	I0108 22:54:39.506813 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.506821 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.506826 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.506831 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.506837 1215935 command_runner.go:130] >     },
	I0108 22:54:39.506842 1215935 command_runner.go:130] >     {
	I0108 22:54:39.506851 1215935 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0108 22:54:39.506859 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.506866 1215935 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 22:54:39.506872 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506878 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.506890 1215935 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0108 22:54:39.506900 1215935 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0108 22:54:39.506915 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.506924 1215935 command_runner.go:130] >       "size": "182203183",
	I0108 22:54:39.506928 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.506934 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.506940 1215935 command_runner.go:130] >       },
	I0108 22:54:39.506945 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.506950 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.506958 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.506962 1215935 command_runner.go:130] >     },
	I0108 22:54:39.506966 1215935 command_runner.go:130] >     {
	I0108 22:54:39.506977 1215935 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0108 22:54:39.506984 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.506994 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 22:54:39.507000 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507008 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.507017 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0108 22:54:39.507028 1215935 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0108 22:54:39.507033 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507041 1215935 command_runner.go:130] >       "size": "121119694",
	I0108 22:54:39.507045 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.507051 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.507058 1215935 command_runner.go:130] >       },
	I0108 22:54:39.507063 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.507068 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.507075 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.507079 1215935 command_runner.go:130] >     },
	I0108 22:54:39.507084 1215935 command_runner.go:130] >     {
	I0108 22:54:39.507094 1215935 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0108 22:54:39.507099 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.507109 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 22:54:39.507114 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507122 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.507131 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 22:54:39.507144 1215935 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0108 22:54:39.507149 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507156 1215935 command_runner.go:130] >       "size": "117252916",
	I0108 22:54:39.507161 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.507166 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.507173 1215935 command_runner.go:130] >       },
	I0108 22:54:39.507177 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.507182 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.507190 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.507194 1215935 command_runner.go:130] >     },
	I0108 22:54:39.507198 1215935 command_runner.go:130] >     {
	I0108 22:54:39.507209 1215935 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0108 22:54:39.507214 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.507222 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 22:54:39.507229 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507237 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.507246 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0108 22:54:39.507258 1215935 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 22:54:39.507263 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507271 1215935 command_runner.go:130] >       "size": "69992343",
	I0108 22:54:39.507275 1215935 command_runner.go:130] >       "uid": null,
	I0108 22:54:39.507280 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.507288 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.507293 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.507297 1215935 command_runner.go:130] >     },
	I0108 22:54:39.507304 1215935 command_runner.go:130] >     {
	I0108 22:54:39.507313 1215935 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0108 22:54:39.507320 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.507326 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 22:54:39.507333 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507338 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.507359 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 22:54:39.507374 1215935 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0108 22:54:39.507380 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507386 1215935 command_runner.go:130] >       "size": "59253556",
	I0108 22:54:39.507393 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.507398 1215935 command_runner.go:130] >         "value": "0"
	I0108 22:54:39.507403 1215935 command_runner.go:130] >       },
	I0108 22:54:39.507410 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.507415 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.507420 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.507427 1215935 command_runner.go:130] >     },
	I0108 22:54:39.507431 1215935 command_runner.go:130] >     {
	I0108 22:54:39.507439 1215935 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0108 22:54:39.507447 1215935 command_runner.go:130] >       "repoTags": [
	I0108 22:54:39.507452 1215935 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 22:54:39.507459 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507464 1215935 command_runner.go:130] >       "repoDigests": [
	I0108 22:54:39.507475 1215935 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0108 22:54:39.507484 1215935 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0108 22:54:39.507494 1215935 command_runner.go:130] >       ],
	I0108 22:54:39.507499 1215935 command_runner.go:130] >       "size": "520014",
	I0108 22:54:39.507507 1215935 command_runner.go:130] >       "uid": {
	I0108 22:54:39.507512 1215935 command_runner.go:130] >         "value": "65535"
	I0108 22:54:39.507516 1215935 command_runner.go:130] >       },
	I0108 22:54:39.507524 1215935 command_runner.go:130] >       "username": "",
	I0108 22:54:39.507529 1215935 command_runner.go:130] >       "spec": null,
	I0108 22:54:39.507534 1215935 command_runner.go:130] >       "pinned": false
	I0108 22:54:39.507540 1215935 command_runner.go:130] >     }
	I0108 22:54:39.507544 1215935 command_runner.go:130] >   ]
	I0108 22:54:39.507549 1215935 command_runner.go:130] > }
	I0108 22:54:39.510455 1215935 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 22:54:39.510479 1215935 cache_images.go:84] Images are preloaded, skipping loading
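
The preload check above shells out to "sudo crictl images --output json" and inspects the returned image list before concluding that all images are preloaded for the cri-o runtime. Below is a minimal, self-contained sketch of decoding that JSON shape (an "images" array whose entries carry "id", "repoTags", "repoDigests", "size" and "pinned", exactly as in the dump above) using Go's standard library; the type names and the printing at the end are illustrative and are not minikube's internal types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields visible in the "crictl images --output json"
// dump above; the struct names are illustrative, not minikube's own.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Assumes crictl is installed and the caller may run it via sudo,
	// as in the logged command.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images failed:", err)
		return
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decoding image list failed:", err)
		return
	}
	for _, img := range list.Images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  size=%s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}

Run against the node above, this would list the nine preloaded images, from kindnetd through pause:3.9.
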
	I0108 22:54:39.510554 1215935 ssh_runner.go:195] Run: crio config
	I0108 22:54:39.567628 1215935 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 22:54:39.567657 1215935 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 22:54:39.567667 1215935 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 22:54:39.567672 1215935 command_runner.go:130] > #
	I0108 22:54:39.567681 1215935 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 22:54:39.567692 1215935 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 22:54:39.567703 1215935 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 22:54:39.567717 1215935 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 22:54:39.567725 1215935 command_runner.go:130] > # reload'.
	I0108 22:54:39.567732 1215935 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 22:54:39.567740 1215935 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 22:54:39.567752 1215935 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 22:54:39.567760 1215935 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 22:54:39.567768 1215935 command_runner.go:130] > [crio]
	I0108 22:54:39.567775 1215935 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 22:54:39.567782 1215935 command_runner.go:130] > # container images, in this directory.
	I0108 22:54:39.568382 1215935 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 22:54:39.568402 1215935 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 22:54:39.568922 1215935 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 22:54:39.568940 1215935 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 22:54:39.568949 1215935 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 22:54:39.569529 1215935 command_runner.go:130] > # storage_driver = "vfs"
	I0108 22:54:39.569547 1215935 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 22:54:39.569556 1215935 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 22:54:39.569797 1215935 command_runner.go:130] > # storage_option = [
	I0108 22:54:39.570077 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.570103 1215935 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 22:54:39.570153 1215935 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 22:54:39.570848 1215935 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 22:54:39.570876 1215935 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 22:54:39.570894 1215935 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 22:54:39.570901 1215935 command_runner.go:130] > # always happen on a node reboot
	I0108 22:54:39.571531 1215935 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 22:54:39.571548 1215935 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 22:54:39.571556 1215935 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 22:54:39.571576 1215935 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 22:54:39.572069 1215935 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 22:54:39.572089 1215935 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 22:54:39.572101 1215935 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 22:54:39.573046 1215935 command_runner.go:130] > # internal_wipe = true
	I0108 22:54:39.573065 1215935 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 22:54:39.573075 1215935 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 22:54:39.573088 1215935 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 22:54:39.573623 1215935 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 22:54:39.573642 1215935 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 22:54:39.573648 1215935 command_runner.go:130] > [crio.api]
	I0108 22:54:39.573655 1215935 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 22:54:39.574138 1215935 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 22:54:39.574154 1215935 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 22:54:39.574630 1215935 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 22:54:39.574660 1215935 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 22:54:39.574669 1215935 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 22:54:39.575143 1215935 command_runner.go:130] > # stream_port = "0"
	I0108 22:54:39.575158 1215935 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 22:54:39.575611 1215935 command_runner.go:130] > # stream_enable_tls = false
	I0108 22:54:39.575627 1215935 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 22:54:39.575983 1215935 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 22:54:39.576000 1215935 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 22:54:39.576008 1215935 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 22:54:39.576013 1215935 command_runner.go:130] > # minutes.
	I0108 22:54:39.576501 1215935 command_runner.go:130] > # stream_tls_cert = ""
	I0108 22:54:39.576518 1215935 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 22:54:39.576527 1215935 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 22:54:39.577189 1215935 command_runner.go:130] > # stream_tls_key = ""
	I0108 22:54:39.577215 1215935 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 22:54:39.577224 1215935 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 22:54:39.577242 1215935 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 22:54:39.577857 1215935 command_runner.go:130] > # stream_tls_ca = ""
	I0108 22:54:39.577876 1215935 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 22:54:39.578817 1215935 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 22:54:39.578837 1215935 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 22:54:39.579716 1215935 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
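
For reference, the overridden gRPC limits above are 83886080 bytes = 80 × 1024 × 1024, i.e. 80 MiB for both send and receive, five times the 16 MiB (16 × 1024 × 1024) fallback the comments describe for an unset or non-positive value.
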
	I0108 22:54:39.579754 1215935 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 22:54:39.579765 1215935 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 22:54:39.579773 1215935 command_runner.go:130] > [crio.runtime]
	I0108 22:54:39.579781 1215935 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 22:54:39.579790 1215935 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 22:54:39.579796 1215935 command_runner.go:130] > # "nofile=1024:2048"
	I0108 22:54:39.579808 1215935 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 22:54:39.580055 1215935 command_runner.go:130] > # default_ulimits = [
	I0108 22:54:39.580322 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.580338 1215935 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 22:54:39.580809 1215935 command_runner.go:130] > # no_pivot = false
	I0108 22:54:39.580832 1215935 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 22:54:39.580841 1215935 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 22:54:39.581327 1215935 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 22:54:39.581342 1215935 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 22:54:39.581354 1215935 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 22:54:39.581363 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 22:54:39.581724 1215935 command_runner.go:130] > # conmon = ""
	I0108 22:54:39.581750 1215935 command_runner.go:130] > # Cgroup setting for conmon
	I0108 22:54:39.581760 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 22:54:39.582006 1215935 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 22:54:39.582022 1215935 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 22:54:39.582030 1215935 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 22:54:39.582039 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 22:54:39.582266 1215935 command_runner.go:130] > # conmon_env = [
	I0108 22:54:39.582513 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.582529 1215935 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 22:54:39.582537 1215935 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 22:54:39.582544 1215935 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 22:54:39.582766 1215935 command_runner.go:130] > # default_env = [
	I0108 22:54:39.583048 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.583065 1215935 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 22:54:39.583578 1215935 command_runner.go:130] > # selinux = false
	I0108 22:54:39.583595 1215935 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 22:54:39.583603 1215935 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 22:54:39.583610 1215935 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 22:54:39.584003 1215935 command_runner.go:130] > # seccomp_profile = ""
	I0108 22:54:39.584019 1215935 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 22:54:39.584027 1215935 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 22:54:39.584035 1215935 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 22:54:39.584041 1215935 command_runner.go:130] > # which might increase security.
	I0108 22:54:39.584595 1215935 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 22:54:39.584614 1215935 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 22:54:39.584623 1215935 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 22:54:39.584631 1215935 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 22:54:39.584638 1215935 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 22:54:39.584645 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:54:39.585175 1215935 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 22:54:39.585198 1215935 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 22:54:39.585204 1215935 command_runner.go:130] > # the cgroup blockio controller.
	I0108 22:54:39.585615 1215935 command_runner.go:130] > # blockio_config_file = ""
	I0108 22:54:39.585633 1215935 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 22:54:39.585639 1215935 command_runner.go:130] > # irqbalance daemon.
	I0108 22:54:39.586168 1215935 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 22:54:39.586185 1215935 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 22:54:39.586192 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:54:39.586590 1215935 command_runner.go:130] > # rdt_config_file = ""
	I0108 22:54:39.586605 1215935 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 22:54:39.586892 1215935 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 22:54:39.586908 1215935 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 22:54:39.587315 1215935 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 22:54:39.587333 1215935 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 22:54:39.587341 1215935 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 22:54:39.587346 1215935 command_runner.go:130] > # will be added.
	I0108 22:54:39.587601 1215935 command_runner.go:130] > # default_capabilities = [
	I0108 22:54:39.587993 1215935 command_runner.go:130] > # 	"CHOWN",
	I0108 22:54:39.588296 1215935 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 22:54:39.588604 1215935 command_runner.go:130] > # 	"FSETID",
	I0108 22:54:39.588939 1215935 command_runner.go:130] > # 	"FOWNER",
	I0108 22:54:39.589254 1215935 command_runner.go:130] > # 	"SETGID",
	I0108 22:54:39.589589 1215935 command_runner.go:130] > # 	"SETUID",
	I0108 22:54:39.589882 1215935 command_runner.go:130] > # 	"SETPCAP",
	I0108 22:54:39.590189 1215935 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 22:54:39.590493 1215935 command_runner.go:130] > # 	"KILL",
	I0108 22:54:39.590783 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.590803 1215935 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 22:54:39.590813 1215935 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 22:54:39.591432 1215935 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 22:54:39.591450 1215935 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 22:54:39.591458 1215935 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 22:54:39.591715 1215935 command_runner.go:130] > # default_sysctls = [
	I0108 22:54:39.591999 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.592016 1215935 command_runner.go:130] > # List of devices on the host that a
	I0108 22:54:39.592024 1215935 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 22:54:39.592282 1215935 command_runner.go:130] > # allowed_devices = [
	I0108 22:54:39.592593 1215935 command_runner.go:130] > # 	"/dev/fuse",
	I0108 22:54:39.592864 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.592879 1215935 command_runner.go:130] > # List of additional devices, specified as
	I0108 22:54:39.592912 1215935 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 22:54:39.592922 1215935 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 22:54:39.592930 1215935 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 22:54:39.593214 1215935 command_runner.go:130] > # additional_devices = [
	I0108 22:54:39.593499 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.593520 1215935 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 22:54:39.593778 1215935 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 22:54:39.594094 1215935 command_runner.go:130] > # 	"/etc/cdi",
	I0108 22:54:39.594373 1215935 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 22:54:39.594631 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.594648 1215935 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 22:54:39.594656 1215935 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 22:54:39.594662 1215935 command_runner.go:130] > # Defaults to false.
	I0108 22:54:39.595166 1215935 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 22:54:39.595183 1215935 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 22:54:39.595191 1215935 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 22:54:39.595415 1215935 command_runner.go:130] > # hooks_dir = [
	I0108 22:54:39.595698 1215935 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 22:54:39.595964 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.595981 1215935 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 22:54:39.595989 1215935 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 22:54:39.595996 1215935 command_runner.go:130] > # its default mounts from the following two files:
	I0108 22:54:39.596001 1215935 command_runner.go:130] > #
	I0108 22:54:39.596016 1215935 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 22:54:39.596027 1215935 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 22:54:39.596034 1215935 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 22:54:39.596038 1215935 command_runner.go:130] > #
	I0108 22:54:39.596049 1215935 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 22:54:39.596058 1215935 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 22:54:39.596069 1215935 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 22:54:39.596075 1215935 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 22:54:39.596079 1215935 command_runner.go:130] > #
	I0108 22:54:39.596480 1215935 command_runner.go:130] > # default_mounts_file = ""
	I0108 22:54:39.596497 1215935 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 22:54:39.596506 1215935 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 22:54:39.596969 1215935 command_runner.go:130] > # pids_limit = 0
	I0108 22:54:39.596987 1215935 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 22:54:39.597007 1215935 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 22:54:39.597016 1215935 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 22:54:39.597026 1215935 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 22:54:39.597524 1215935 command_runner.go:130] > # log_size_max = -1
	I0108 22:54:39.597542 1215935 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 22:54:39.598031 1215935 command_runner.go:130] > # log_to_journald = false
	I0108 22:54:39.598047 1215935 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 22:54:39.599473 1215935 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 22:54:39.599494 1215935 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 22:54:39.599501 1215935 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 22:54:39.599508 1215935 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 22:54:39.599517 1215935 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 22:54:39.599526 1215935 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 22:54:39.599531 1215935 command_runner.go:130] > # read_only = false
	I0108 22:54:39.599542 1215935 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 22:54:39.599549 1215935 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 22:54:39.599555 1215935 command_runner.go:130] > # live configuration reload.
	I0108 22:54:39.599562 1215935 command_runner.go:130] > # log_level = "info"
	I0108 22:54:39.599569 1215935 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 22:54:39.599581 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:54:39.599586 1215935 command_runner.go:130] > # log_filter = ""
	I0108 22:54:39.599593 1215935 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 22:54:39.599604 1215935 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 22:54:39.599614 1215935 command_runner.go:130] > # separated by comma.
	I0108 22:54:39.599619 1215935 command_runner.go:130] > # uid_mappings = ""
	I0108 22:54:39.599627 1215935 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 22:54:39.599642 1215935 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 22:54:39.599647 1215935 command_runner.go:130] > # separated by comma.
	I0108 22:54:39.599652 1215935 command_runner.go:130] > # gid_mappings = ""
	I0108 22:54:39.599662 1215935 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 22:54:39.599671 1215935 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 22:54:39.599682 1215935 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 22:54:39.599688 1215935 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 22:54:39.599706 1215935 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 22:54:39.599713 1215935 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 22:54:39.599721 1215935 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 22:54:39.599733 1215935 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 22:54:39.599745 1215935 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 22:54:39.599755 1215935 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 22:54:39.599763 1215935 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 22:54:39.599770 1215935 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 22:54:39.599777 1215935 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 22:54:39.599785 1215935 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 22:54:39.599795 1215935 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 22:54:39.599803 1215935 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 22:54:39.599811 1215935 command_runner.go:130] > # drop_infra_ctr = true
	I0108 22:54:39.599819 1215935 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 22:54:39.599826 1215935 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 22:54:39.599837 1215935 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 22:54:39.599844 1215935 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 22:54:39.599853 1215935 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 22:54:39.599861 1215935 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 22:54:39.599866 1215935 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 22:54:39.599875 1215935 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 22:54:39.599882 1215935 command_runner.go:130] > # pinns_path = ""
	I0108 22:54:39.599890 1215935 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 22:54:39.599901 1215935 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 22:54:39.599908 1215935 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 22:54:39.599914 1215935 command_runner.go:130] > # default_runtime = "runc"
	I0108 22:54:39.599922 1215935 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 22:54:39.599933 1215935 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 22:54:39.599947 1215935 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 22:54:39.599958 1215935 command_runner.go:130] > # creation as a file is not desired either.
	I0108 22:54:39.599969 1215935 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 22:54:39.599977 1215935 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 22:54:39.599983 1215935 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 22:54:39.599987 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.599995 1215935 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 22:54:39.600005 1215935 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 22:54:39.600014 1215935 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 22:54:39.600024 1215935 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 22:54:39.600028 1215935 command_runner.go:130] > #
	I0108 22:54:39.600034 1215935 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 22:54:39.600043 1215935 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 22:54:39.600048 1215935 command_runner.go:130] > #  runtime_type = "oci"
	I0108 22:54:39.600054 1215935 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 22:54:39.600063 1215935 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 22:54:39.600068 1215935 command_runner.go:130] > #  allowed_annotations = []
	I0108 22:54:39.600075 1215935 command_runner.go:130] > # Where:
	I0108 22:54:39.600081 1215935 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 22:54:39.600095 1215935 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 22:54:39.600107 1215935 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 22:54:39.600114 1215935 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 22:54:39.600119 1215935 command_runner.go:130] > #   in $PATH.
	I0108 22:54:39.600130 1215935 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 22:54:39.600138 1215935 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 22:54:39.600147 1215935 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 22:54:39.600154 1215935 command_runner.go:130] > #   state.
	I0108 22:54:39.600162 1215935 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 22:54:39.600169 1215935 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 22:54:39.600177 1215935 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 22:54:39.600190 1215935 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 22:54:39.600197 1215935 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 22:54:39.600206 1215935 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 22:54:39.600214 1215935 command_runner.go:130] > #   The currently recognized values are:
	I0108 22:54:39.600222 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 22:54:39.600231 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 22:54:39.600245 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 22:54:39.600261 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 22:54:39.600273 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 22:54:39.600281 1215935 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 22:54:39.600289 1215935 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 22:54:39.600313 1215935 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 22:54:39.600321 1215935 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 22:54:39.600329 1215935 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 22:54:39.600336 1215935 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 22:54:39.600341 1215935 command_runner.go:130] > runtime_type = "oci"
	I0108 22:54:39.600346 1215935 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 22:54:39.600354 1215935 command_runner.go:130] > runtime_config_path = ""
	I0108 22:54:39.600359 1215935 command_runner.go:130] > monitor_path = ""
	I0108 22:54:39.600366 1215935 command_runner.go:130] > monitor_cgroup = ""
	I0108 22:54:39.600371 1215935 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 22:54:39.600409 1215935 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 22:54:39.600418 1215935 command_runner.go:130] > # running containers
	I0108 22:54:39.600423 1215935 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 22:54:39.600431 1215935 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 22:54:39.600442 1215935 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 22:54:39.600452 1215935 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 22:54:39.600461 1215935 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 22:54:39.600467 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 22:54:39.600485 1215935 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 22:54:39.600491 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 22:54:39.600503 1215935 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 22:54:39.600509 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 22:54:39.600517 1215935 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 22:54:39.600530 1215935 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 22:54:39.600538 1215935 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 22:54:39.600556 1215935 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 22:54:39.600566 1215935 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 22:54:39.600577 1215935 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 22:54:39.600588 1215935 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 22:54:39.600601 1215935 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 22:54:39.600608 1215935 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 22:54:39.600617 1215935 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 22:54:39.600626 1215935 command_runner.go:130] > # Example:
	I0108 22:54:39.600632 1215935 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 22:54:39.600646 1215935 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 22:54:39.600652 1215935 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 22:54:39.600665 1215935 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 22:54:39.600672 1215935 command_runner.go:130] > # cpuset = 0
	I0108 22:54:39.600677 1215935 command_runner.go:130] > # cpushares = "0-1"
	I0108 22:54:39.600681 1215935 command_runner.go:130] > # Where:
	I0108 22:54:39.600689 1215935 command_runner.go:130] > # The workload name is workload-type.
	I0108 22:54:39.600698 1215935 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 22:54:39.600704 1215935 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 22:54:39.600715 1215935 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 22:54:39.600727 1215935 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 22:54:39.600738 1215935 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 22:54:39.600742 1215935 command_runner.go:130] > # 
	I0108 22:54:39.600752 1215935 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 22:54:39.600756 1215935 command_runner.go:130] > #
	I0108 22:54:39.600765 1215935 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 22:54:39.600778 1215935 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 22:54:39.600796 1215935 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 22:54:39.600813 1215935 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 22:54:39.600829 1215935 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 22:54:39.600837 1215935 command_runner.go:130] > [crio.image]
	I0108 22:54:39.600846 1215935 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 22:54:39.600854 1215935 command_runner.go:130] > # default_transport = "docker://"
	I0108 22:54:39.600863 1215935 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 22:54:39.600874 1215935 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 22:54:39.600882 1215935 command_runner.go:130] > # global_auth_file = ""
	I0108 22:54:39.600888 1215935 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 22:54:39.600897 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:54:39.600903 1215935 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 22:54:39.600911 1215935 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 22:54:39.600921 1215935 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 22:54:39.600927 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:54:39.600935 1215935 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 22:54:39.600942 1215935 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 22:54:39.600952 1215935 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 22:54:39.600963 1215935 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 22:54:39.600973 1215935 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 22:54:39.600987 1215935 command_runner.go:130] > # pause_command = "/pause"
	I0108 22:54:39.601014 1215935 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 22:54:39.601022 1215935 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 22:54:39.601030 1215935 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 22:54:39.601049 1215935 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 22:54:39.601056 1215935 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 22:54:39.601064 1215935 command_runner.go:130] > # signature_policy = ""
	I0108 22:54:39.601072 1215935 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 22:54:39.601082 1215935 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 22:54:39.601089 1215935 command_runner.go:130] > # changing them here.
	I0108 22:54:39.601094 1215935 command_runner.go:130] > # insecure_registries = [
	I0108 22:54:39.601098 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.601105 1215935 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 22:54:39.601112 1215935 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 22:54:39.601121 1215935 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 22:54:39.601129 1215935 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 22:54:39.601140 1215935 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 22:54:39.601151 1215935 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 22:54:39.601156 1215935 command_runner.go:130] > # CNI plugins.
	I0108 22:54:39.601163 1215935 command_runner.go:130] > [crio.network]
	I0108 22:54:39.601170 1215935 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 22:54:39.601177 1215935 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 22:54:39.601182 1215935 command_runner.go:130] > # cni_default_network = ""
	I0108 22:54:39.601189 1215935 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 22:54:39.601194 1215935 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 22:54:39.601204 1215935 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 22:54:39.601209 1215935 command_runner.go:130] > # plugin_dirs = [
	I0108 22:54:39.601214 1215935 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 22:54:39.601220 1215935 command_runner.go:130] > # ]
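
The [crio.network] section above is left entirely at its defaults: CNI configuration is read from /etc/cni/net.d/, plugin binaries from /opt/cni/bin/, and with cni_default_network empty the first configuration found is used. A minimal sketch of listing the candidate CNI configuration files in that default directory, assuming local read access; the extension filter reflects common CNI conventions rather than anything stated in this dump.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Default network_dir from the [crio.network] section above; adjust this
	// if network_dir was overridden.
	const networkDir = "/etc/cni/net.d/"

	entries, err := os.ReadDir(networkDir)
	if err != nil {
		fmt.Println("cannot read", networkDir+":", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// CNI loaders conventionally pick up .conf, .conflist and .json files;
		// with cni_default_network = "" the first match found is used.
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			fmt.Println(filepath.Join(networkDir, name))
		}
	}
}

On the node being provisioned here, the CNI in question ends up being kindnet, as the log notes a few lines below.
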
	I0108 22:54:39.601227 1215935 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 22:54:39.601232 1215935 command_runner.go:130] > [crio.metrics]
	I0108 22:54:39.601241 1215935 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 22:54:39.601246 1215935 command_runner.go:130] > # enable_metrics = false
	I0108 22:54:39.601254 1215935 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 22:54:39.601260 1215935 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 22:54:39.601269 1215935 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 22:54:39.601279 1215935 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 22:54:39.601289 1215935 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 22:54:39.601294 1215935 command_runner.go:130] > # metrics_collectors = [
	I0108 22:54:39.601299 1215935 command_runner.go:130] > # 	"operations",
	I0108 22:54:39.601307 1215935 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 22:54:39.601320 1215935 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 22:54:39.601324 1215935 command_runner.go:130] > # 	"operations_errors",
	I0108 22:54:39.601329 1215935 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 22:54:39.601334 1215935 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 22:54:39.601340 1215935 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 22:54:39.601348 1215935 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 22:54:39.601356 1215935 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 22:54:39.601367 1215935 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 22:54:39.601377 1215935 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 22:54:39.601383 1215935 command_runner.go:130] > # 	"containers_oom_total",
	I0108 22:54:39.601395 1215935 command_runner.go:130] > # 	"containers_oom",
	I0108 22:54:39.601401 1215935 command_runner.go:130] > # 	"processes_defunct",
	I0108 22:54:39.601406 1215935 command_runner.go:130] > # 	"operations_total",
	I0108 22:54:39.601414 1215935 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 22:54:39.601420 1215935 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 22:54:39.601428 1215935 command_runner.go:130] > # 	"operations_errors_total",
	I0108 22:54:39.601436 1215935 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 22:54:39.601442 1215935 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 22:54:39.601457 1215935 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 22:54:39.601465 1215935 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 22:54:39.601471 1215935 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 22:54:39.601476 1215935 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 22:54:39.601483 1215935 command_runner.go:130] > # ]
	I0108 22:54:39.601489 1215935 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 22:54:39.601494 1215935 command_runner.go:130] > # metrics_port = 9090
	I0108 22:54:39.601500 1215935 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 22:54:39.601505 1215935 command_runner.go:130] > # metrics_socket = ""
	I0108 22:54:39.601514 1215935 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 22:54:39.601530 1215935 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 22:54:39.601538 1215935 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 22:54:39.601547 1215935 command_runner.go:130] > # certificate on any modification event.
	I0108 22:54:39.601552 1215935 command_runner.go:130] > # metrics_cert = ""
	I0108 22:54:39.601562 1215935 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 22:54:39.601571 1215935 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 22:54:39.601579 1215935 command_runner.go:130] > # metrics_key = ""
	I0108 22:54:39.601586 1215935 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 22:54:39.601590 1215935 command_runner.go:130] > [crio.tracing]
	I0108 22:54:39.601601 1215935 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 22:54:39.601612 1215935 command_runner.go:130] > # enable_tracing = false
	I0108 22:54:39.601618 1215935 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 22:54:39.601624 1215935 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 22:54:39.601632 1215935 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 22:54:39.601638 1215935 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 22:54:39.601646 1215935 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 22:54:39.601653 1215935 command_runner.go:130] > [crio.stats]
	I0108 22:54:39.601660 1215935 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 22:54:39.601668 1215935 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 22:54:39.601673 1215935 command_runner.go:130] > # stats_collection_period = 0
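The commented defaults above cover CRI-O's Prometheus metrics, OpenTelemetry tracing and stats sections. As a minimal sketch, not something this run does: assuming a systemd-managed CRI-O reading /etc/crio/crio.conf with the commented default still present, enabling and scraping the metrics endpoint on the default port 9090 would look roughly like:

	# sketch: turn on the metrics server described in the [crio.metrics] section above
	sudo sed -i 's/^# enable_metrics = false/enable_metrics = true/' /etc/crio/crio.conf
	sudo systemctl restart crio
	# the "operations" collector from the list above should now be exported in Prometheus format
	curl -s http://127.0.0.1:9090/metrics | grep -m 5 operations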
	I0108 22:54:39.603872 1215935 command_runner.go:130] ! time="2024-01-08 22:54:39.560680199Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 22:54:39.603909 1215935 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 22:54:39.604250 1215935 cni.go:84] Creating CNI manager for ""
	I0108 22:54:39.604271 1215935 cni.go:136] 1 nodes found, recommending kindnet
	I0108 22:54:39.604313 1215935 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:54:39.604337 1215935 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-265402 NodeName:multinode-265402 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:54:39.604490 1215935 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-265402"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
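The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml a few steps below. As a hedged sketch (not part of this run), the same file could be exercised on the node with a kubeadm dry run before the real init; --config and --dry-run are standard kubeadm init flags:

	# sketch: validate the generated config without changing the host (run inside the minikube node)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run \
	  --ignore-preflight-errors=SystemVerification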
	
	I0108 22:54:39.604581 1215935 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-265402 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
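The kubelet drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp step below. A small sketch for inspecting the effective unit on the node, using standard systemd commands rather than anything taken from this log:

	# show the base kubelet unit plus every drop-in that overrides ExecStart
	systemctl cat kubelet
	# after editing a drop-in, reload unit files and restart the kubelet
	sudo systemctl daemon-reload && sudo systemctl restart kubelet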
	I0108 22:54:39.604656 1215935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:54:39.614729 1215935 command_runner.go:130] > kubeadm
	I0108 22:54:39.614747 1215935 command_runner.go:130] > kubectl
	I0108 22:54:39.614752 1215935 command_runner.go:130] > kubelet
	I0108 22:54:39.615869 1215935 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:54:39.615989 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:54:39.627012 1215935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0108 22:54:39.648327 1215935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:54:39.670595 1215935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0108 22:54:39.691942 1215935 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 22:54:39.696704 1215935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
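The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one via a temp file. A minimal way to confirm the result (plain grep/getent, assumed rather than taken from the log):

	grep 'control-plane.minikube.internal' /etc/hosts
	# expected: 192.168.58.2	control-plane.minikube.internal
	getent hosts control-plane.minikube.internal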
	I0108 22:54:39.710317 1215935 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402 for IP: 192.168.58.2
	I0108 22:54:39.710351 1215935 certs.go:190] acquiring lock for shared ca certs: {Name:mk2f5e9ada40477437d91c2ac8d6b62bb5d1e97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:39.710533 1215935 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key
	I0108 22:54:39.710577 1215935 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key
	I0108 22:54:39.710630 1215935 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key
	I0108 22:54:39.710646 1215935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt with IP's: []
	I0108 22:54:39.844760 1215935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt ...
	I0108 22:54:39.844791 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt: {Name:mk1af81073fa163cefe755e07bcecf04d155a3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:39.845024 1215935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key ...
	I0108 22:54:39.845040 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key: {Name:mk8e7d9d1e7972d7280a24262cf95d316597e6de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:39.845143 1215935 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key.cee25041
	I0108 22:54:39.845163 1215935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:54:40.526699 1215935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt.cee25041 ...
	I0108 22:54:40.526734 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt.cee25041: {Name:mk16ae425be02fb522446216085234e2af2d8b1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:40.526922 1215935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key.cee25041 ...
	I0108 22:54:40.526938 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key.cee25041: {Name:mk6031932b90eca064515e7f24a457f694e32155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:40.527024 1215935 certs.go:337] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt
	I0108 22:54:40.527118 1215935 certs.go:341] copying /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key
	I0108 22:54:40.527180 1215935 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.key
	I0108 22:54:40.527199 1215935 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.crt with IP's: []
	I0108 22:54:40.769762 1215935 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.crt ...
	I0108 22:54:40.769793 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.crt: {Name:mk979a1da59426c995a118c51c36b0b23d31140c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:40.769984 1215935 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.key ...
	I0108 22:54:40.769998 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.key: {Name:mk0ae742ca680c29a0b396817becf2e0e1697e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:40.770087 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 22:54:40.770114 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 22:54:40.770127 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 22:54:40.770142 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 22:54:40.770155 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 22:54:40.770167 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 22:54:40.770182 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 22:54:40.770200 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 22:54:40.770258 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem (1338 bytes)
	W0108 22:54:40.770300 1215935 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251_empty.pem, impossibly tiny 0 bytes
	I0108 22:54:40.770314 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:54:40.770343 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:54:40.770373 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:54:40.770401 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem (1675 bytes)
	I0108 22:54:40.770452 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:54:40.770482 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:54:40.770496 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem -> /usr/share/ca-certificates/1152251.pem
	I0108 22:54:40.770507 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /usr/share/ca-certificates/11522512.pem
	I0108 22:54:40.771091 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:54:40.800228 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 22:54:40.828778 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:54:40.857302 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:54:40.885879 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:54:40.914576 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:54:40.943302 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:54:40.972265 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:54:41.001615 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:54:41.033819 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem --> /usr/share/ca-certificates/1152251.pem (1338 bytes)
	I0108 22:54:41.062556 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /usr/share/ca-certificates/11522512.pem (1708 bytes)
	I0108 22:54:41.091197 1215935 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:54:41.112422 1215935 ssh_runner.go:195] Run: openssl version
	I0108 22:54:41.121067 1215935 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 22:54:41.121164 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11522512.pem && ln -fs /usr/share/ca-certificates/11522512.pem /etc/ssl/certs/11522512.pem"
	I0108 22:54:41.133612 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11522512.pem
	I0108 22:54:41.138264 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 22:39 /usr/share/ca-certificates/11522512.pem
	I0108 22:54:41.138393 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:39 /usr/share/ca-certificates/11522512.pem
	I0108 22:54:41.138465 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11522512.pem
	I0108 22:54:41.146960 1215935 command_runner.go:130] > 3ec20f2e
	I0108 22:54:41.147374 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11522512.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:54:41.158976 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:54:41.170316 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:54:41.174826 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:54:41.174860 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:54:41.174933 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:54:41.184041 1215935 command_runner.go:130] > b5213941
	I0108 22:54:41.184110 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:54:41.195966 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1152251.pem && ln -fs /usr/share/ca-certificates/1152251.pem /etc/ssl/certs/1152251.pem"
	I0108 22:54:41.207365 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1152251.pem
	I0108 22:54:41.212101 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 22:39 /usr/share/ca-certificates/1152251.pem
	I0108 22:54:41.212130 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:39 /usr/share/ca-certificates/1152251.pem
	I0108 22:54:41.212178 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1152251.pem
	I0108 22:54:41.220272 1215935 command_runner.go:130] > 51391683
	I0108 22:54:41.220740 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1152251.pem /etc/ssl/certs/51391683.0"
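The three blocks above repeat the same pattern for each CA bundle: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the OpenSSL lookup path finds it. A condensed, illustrative sketch of that loop using the same openssl/ln commands as the log (the file list is just the three names seen above, and the log actually links via the /etc/ssl/certs copy):

	for pem in minikubeCA.pem 1152251.pem 11522512.pem; do
	  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
	  sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$hash.0"
	done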
	I0108 22:54:41.232663 1215935 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:54:41.236966 1215935 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:54:41.237013 1215935 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:54:41.237057 1215935 kubeadm.go:404] StartCluster: {Name:multinode-265402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:54:41.237134 1215935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 22:54:41.237191 1215935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:54:41.281076 1215935 cri.go:89] found id: ""
	I0108 22:54:41.281212 1215935 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:54:41.292003 1215935 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 22:54:41.292029 1215935 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 22:54:41.292037 1215935 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 22:54:41.292143 1215935 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:54:41.302787 1215935 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0108 22:54:41.302879 1215935 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:54:41.313799 1215935 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 22:54:41.313823 1215935 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 22:54:41.313833 1215935 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 22:54:41.313852 1215935 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:54:41.313898 1215935 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:54:41.313933 1215935 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0108 22:54:41.367507 1215935 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:54:41.367578 1215935 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 22:54:41.367882 1215935 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:54:41.367929 1215935 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 22:54:41.415438 1215935 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:54:41.415519 1215935 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:54:41.415589 1215935 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0108 22:54:41.415619 1215935 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0108 22:54:41.415667 1215935 kubeadm.go:322] OS: Linux
	I0108 22:54:41.415685 1215935 command_runner.go:130] > OS: Linux
	I0108 22:54:41.415751 1215935 kubeadm.go:322] CGROUPS_CPU: enabled
	I0108 22:54:41.415777 1215935 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 22:54:41.415869 1215935 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0108 22:54:41.415894 1215935 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 22:54:41.415968 1215935 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0108 22:54:41.415994 1215935 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 22:54:41.416070 1215935 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0108 22:54:41.416095 1215935 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 22:54:41.416172 1215935 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0108 22:54:41.416197 1215935 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 22:54:41.416273 1215935 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0108 22:54:41.416299 1215935 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 22:54:41.416381 1215935 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0108 22:54:41.416406 1215935 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 22:54:41.416489 1215935 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0108 22:54:41.416517 1215935 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 22:54:41.416592 1215935 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0108 22:54:41.416612 1215935 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 22:54:41.493302 1215935 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:54:41.493380 1215935 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:54:41.493513 1215935 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:54:41.493539 1215935 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:54:41.493649 1215935 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:54:41.493673 1215935 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:54:41.753132 1215935 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:54:41.758388 1215935 out.go:204]   - Generating certificates and keys ...
	I0108 22:54:41.753442 1215935 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:54:41.758583 1215935 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:54:41.758616 1215935 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 22:54:41.758701 1215935 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:54:41.758727 1215935 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 22:54:42.034975 1215935 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:54:42.035048 1215935 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:54:42.280795 1215935 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:54:42.280878 1215935 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:54:42.719828 1215935 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:54:42.719909 1215935 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 22:54:43.437583 1215935 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:54:43.437609 1215935 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 22:54:43.898014 1215935 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:54:43.898045 1215935 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 22:54:43.898167 1215935 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-265402] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 22:54:43.898180 1215935 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-265402] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 22:54:44.839094 1215935 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:54:44.839121 1215935 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 22:54:44.839499 1215935 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-265402] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 22:54:44.839514 1215935 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-265402] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0108 22:54:45.033329 1215935 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:54:45.033358 1215935 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:54:45.649939 1215935 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:54:45.649969 1215935 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:54:45.952692 1215935 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:54:45.952722 1215935 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 22:54:45.952932 1215935 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:54:45.952965 1215935 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:54:46.705795 1215935 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:54:46.705827 1215935 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:54:47.034962 1215935 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:54:47.034977 1215935 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:54:47.579474 1215935 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:54:47.579521 1215935 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:54:48.008560 1215935 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:54:48.008588 1215935 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:54:48.008666 1215935 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:54:48.008672 1215935 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:54:48.012517 1215935 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:54:48.015034 1215935 out.go:204]   - Booting up control plane ...
	I0108 22:54:48.012623 1215935 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:54:48.015138 1215935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:54:48.015159 1215935 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:54:48.015232 1215935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:54:48.015252 1215935 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:54:48.016389 1215935 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:54:48.016410 1215935 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:54:48.029016 1215935 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:54:48.029073 1215935 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:54:48.030176 1215935 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:54:48.030204 1215935 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:54:48.030248 1215935 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:54:48.030261 1215935 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 22:54:48.137696 1215935 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:54:48.137736 1215935 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:54:56.640422 1215935 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502987 seconds
	I0108 22:54:56.640449 1215935 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502987 seconds
	I0108 22:54:56.640571 1215935 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:54:56.640599 1215935 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:54:56.657010 1215935 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:54:56.657038 1215935 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:54:57.183978 1215935 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:54:57.184004 1215935 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:54:57.184177 1215935 kubeadm.go:322] [mark-control-plane] Marking the node multinode-265402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:54:57.184199 1215935 command_runner.go:130] > [mark-control-plane] Marking the node multinode-265402 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:54:57.696720 1215935 kubeadm.go:322] [bootstrap-token] Using token: z7usit.937p2xfdwdd6p8dq
	I0108 22:54:57.698697 1215935 out.go:204]   - Configuring RBAC rules ...
	I0108 22:54:57.696824 1215935 command_runner.go:130] > [bootstrap-token] Using token: z7usit.937p2xfdwdd6p8dq
	I0108 22:54:57.698882 1215935 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:54:57.698909 1215935 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:54:57.704648 1215935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:54:57.704681 1215935 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:54:57.716215 1215935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:54:57.716239 1215935 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:54:57.721179 1215935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:54:57.721198 1215935 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:54:57.726515 1215935 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:54:57.726539 1215935 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:54:57.730546 1215935 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:54:57.730569 1215935 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:54:57.745856 1215935 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:54:57.745883 1215935 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:54:58.009453 1215935 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:54:58.009478 1215935 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 22:54:58.125332 1215935 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:54:58.125366 1215935 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 22:54:58.126361 1215935 kubeadm.go:322] 
	I0108 22:54:58.126443 1215935 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:54:58.126456 1215935 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 22:54:58.126465 1215935 kubeadm.go:322] 
	I0108 22:54:58.126538 1215935 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:54:58.126547 1215935 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 22:54:58.126551 1215935 kubeadm.go:322] 
	I0108 22:54:58.126576 1215935 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:54:58.126584 1215935 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 22:54:58.126639 1215935 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:54:58.126646 1215935 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:54:58.126697 1215935 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:54:58.126711 1215935 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:54:58.126717 1215935 kubeadm.go:322] 
	I0108 22:54:58.126775 1215935 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:54:58.126784 1215935 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 22:54:58.126788 1215935 kubeadm.go:322] 
	I0108 22:54:58.126833 1215935 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:54:58.126842 1215935 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:54:58.126846 1215935 kubeadm.go:322] 
	I0108 22:54:58.126895 1215935 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:54:58.126915 1215935 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 22:54:58.126985 1215935 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:54:58.126993 1215935 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:54:58.127066 1215935 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:54:58.127074 1215935 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:54:58.127079 1215935 kubeadm.go:322] 
	I0108 22:54:58.127169 1215935 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:54:58.127177 1215935 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:54:58.127249 1215935 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:54:58.127259 1215935 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 22:54:58.127263 1215935 kubeadm.go:322] 
	I0108 22:54:58.127345 1215935 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z7usit.937p2xfdwdd6p8dq \
	I0108 22:54:58.127354 1215935 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token z7usit.937p2xfdwdd6p8dq \
	I0108 22:54:58.127455 1215935 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 \
	I0108 22:54:58.127466 1215935 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 \
	I0108 22:54:58.127486 1215935 kubeadm.go:322] 	--control-plane 
	I0108 22:54:58.127494 1215935 command_runner.go:130] > 	--control-plane 
	I0108 22:54:58.127498 1215935 kubeadm.go:322] 
	I0108 22:54:58.127578 1215935 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:54:58.127586 1215935 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:54:58.127590 1215935 kubeadm.go:322] 
	I0108 22:54:58.127671 1215935 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z7usit.937p2xfdwdd6p8dq \
	I0108 22:54:58.127678 1215935 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token z7usit.937p2xfdwdd6p8dq \
	I0108 22:54:58.127777 1215935 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 
	I0108 22:54:58.127785 1215935 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 
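The printed join commands reuse the bootstrap token created during init (ttl 24h per the InitConfiguration above). If that token expires, a fresh worker join command can be minted on the control plane; this is standard kubeadm usage, not something minikube runs in this test:

	# run on the control-plane node to create a new token and print the matching join command
	kubeadm token create --print-join-command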
	I0108 22:54:58.131377 1215935 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 22:54:58.131406 1215935 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 22:54:58.131538 1215935 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:54:58.131552 1215935 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:54:58.131566 1215935 cni.go:84] Creating CNI manager for ""
	I0108 22:54:58.131576 1215935 cni.go:136] 1 nodes found, recommending kindnet
	I0108 22:54:58.135266 1215935 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 22:54:58.137529 1215935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 22:54:58.150687 1215935 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 22:54:58.150709 1215935 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0108 22:54:58.150717 1215935 command_runner.go:130] > Device: 3ah/58d	Inode: 1051177     Links: 1
	I0108 22:54:58.150725 1215935 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:54:58.150731 1215935 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0108 22:54:58.150738 1215935 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0108 22:54:58.150744 1215935 command_runner.go:130] > Change: 2024-01-08 22:30:54.582539572 +0000
	I0108 22:54:58.150750 1215935 command_runner.go:130] >  Birth: 2024-01-08 22:30:54.538539869 +0000
	I0108 22:54:58.151430 1215935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 22:54:58.151446 1215935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 22:54:58.202382 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 22:54:59.138982 1215935 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 22:54:59.146213 1215935 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 22:54:59.157579 1215935 command_runner.go:130] > serviceaccount/kindnet created
	I0108 22:54:59.170017 1215935 command_runner.go:130] > daemonset.apps/kindnet created
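The CNI manifest applied above creates the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet. A quick, hedged way to confirm the DaemonSet is healthy (the app=kindnet label is an assumption about the manifest, which this log does not show):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	kubectl -n kube-system get pods -l app=kindnet -o wide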
	I0108 22:54:59.175635 1215935 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:54:59.175784 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-265402 minikube.k8s.io/updated_at=2024_01_08T22_54_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:54:59.175786 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:54:59.303483 1215935 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 22:54:59.307877 1215935 command_runner.go:130] > -16
	I0108 22:54:59.354101 1215935 command_runner.go:130] > node/multinode-265402 labeled
	I0108 22:54:59.357919 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:54:59.357972 1215935 ops.go:34] apiserver oom_adj: -16
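At this point the node has been labeled with the minikube version/commit metadata and the minikube-rbac ClusterRoleBinding has been created. Verifying both is plain kubectl (a sketch, not part of the test run):

	kubectl get node multinode-265402 --show-labels | tr ',' '\n' | grep minikube.k8s.io
	kubectl get clusterrolebinding minikube-rbac -o wide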
	I0108 22:54:59.477945 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:54:59.858187 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:54:59.952203 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:00.358927 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:00.476205 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:00.858768 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:00.960438 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:01.357988 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:01.452688 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:01.858216 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:01.946114 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:02.358629 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:02.456326 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:02.858877 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:02.957174 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:03.358819 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:03.449446 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:03.858264 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:03.952537 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:04.357998 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:04.451420 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:04.858066 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:04.949803 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:05.358029 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:05.447647 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:05.858172 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:05.947542 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:06.358578 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:06.453229 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:06.858876 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:06.964053 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:07.358705 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:07.451645 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:07.858062 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:07.961363 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:08.358429 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:08.455385 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:08.858316 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:08.960084 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:09.358746 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:09.455458 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:09.857981 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:09.960181 1215935 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 22:55:10.358817 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:10.449989 1215935 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 22:55:10.450013 1215935 command_runner.go:130] > default   0         0s
	I0108 22:55:10.453527 1215935 kubeadm.go:1088] duration metric: took 11.277813698s to wait for elevateKubeSystemPrivileges.
	I0108 22:55:10.453554 1215935 kubeadm.go:406] StartCluster complete in 29.216505889s
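	(Editor's note: the repeated `kubectl get sa default` attempts above, retried roughly every 500ms until the NotFound errors stop, are the elevateKubeSystemPrivileges wait summarized at 22:55:10.453527. A minimal sketch of that retry pattern, with a hypothetical helper name and assuming the same kubectl path and kubeconfig as in the log, not minikube's actual implementation, could look like:)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount re-runs `kubectl get sa default` until the
// API server returns it; kubeadm creates the "default" ServiceAccount
// asynchronously, so early attempts return NotFound as seen in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists
		}
		// NotFound or transient error: back off and retry (~500ms cadence, as logged).
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default ServiceAccount")
}
```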
	I0108 22:55:10.453571 1215935 settings.go:142] acquiring lock: {Name:mk4ee991c68e71724ae577ac1a9a811b1b4e899c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:10.453634 1215935 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:55:10.454330 1215935 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-1146913/kubeconfig: {Name:mk4903c0deda408cf5380ebed8399fb64deac655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:10.454814 1215935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:55:10.455080 1215935 kapi.go:59] client config for multinode-265402: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:55:10.455408 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:55:10.455671 1215935 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:55:10.455707 1215935 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 22:55:10.455768 1215935 addons.go:69] Setting storage-provisioner=true in profile "multinode-265402"
	I0108 22:55:10.455781 1215935 addons.go:237] Setting addon storage-provisioner=true in "multinode-265402"
	I0108 22:55:10.455840 1215935 host.go:66] Checking if "multinode-265402" exists ...
	I0108 22:55:10.455969 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 22:55:10.455980 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:10.455988 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:10.455996 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:10.456218 1215935 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 22:55:10.456260 1215935 addons.go:69] Setting default-storageclass=true in profile "multinode-265402"
	I0108 22:55:10.456278 1215935 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-265402"
	I0108 22:55:10.456327 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:55:10.456535 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:55:10.508907 1215935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:55:10.509302 1215935 kapi.go:59] client config for multinode-265402: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:55:10.509571 1215935 addons.go:237] Setting addon default-storageclass=true in "multinode-265402"
	I0108 22:55:10.509600 1215935 host.go:66] Checking if "multinode-265402" exists ...
	I0108 22:55:10.512175 1215935 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:55:10.510051 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:55:10.514382 1215935 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:55:10.514400 1215935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:55:10.514463 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:55:10.530260 1215935 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0108 22:55:10.530282 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:10.530291 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:10.530299 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:10.530305 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:10.530312 1215935 round_trippers.go:580]     Content-Length: 291
	I0108 22:55:10.530318 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:10 GMT
	I0108 22:55:10.530324 1215935 round_trippers.go:580]     Audit-Id: 131a82aa-dec5-4fa1-9431-439b2636899f
	I0108 22:55:10.530330 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:10.530359 1215935 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"237","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 22:55:10.530800 1215935 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"237","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 22:55:10.530849 1215935 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 22:55:10.530856 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:10.530863 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:10.530869 1215935 round_trippers.go:473]     Content-Type: application/json
	I0108 22:55:10.530887 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:10.541535 1215935 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:55:10.541554 1215935 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:55:10.541622 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:55:10.573343 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:55:10.578452 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:55:10.583890 1215935 round_trippers.go:574] Response Status: 409 Conflict in 52 milliseconds
	I0108 22:55:10.583909 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:10.583917 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:10 GMT
	I0108 22:55:10.583924 1215935 round_trippers.go:580]     Audit-Id: cc8ba91b-1419-4995-91d4-64baa82218b1
	I0108 22:55:10.583931 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:10.583937 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:10.583943 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:10.583949 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:10.583956 1215935 round_trippers.go:580]     Content-Length: 332
	I0108 22:55:10.601613 1215935 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"coredns","group":"apps","kind":"deployments"},"code":409}
	W0108 22:55:10.601840 1215935 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "multinode-265402" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0108 22:55:10.601856 1215935 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
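	(Editor's note: the 409 Conflict above happens because the PUT to the coredns Scale subresource reuses the resourceVersion ("237") captured by the earlier GET; another controller updated the deployment in between, so the apiserver rejects the stale write and minikube treats it as non-retryable. A hedged sketch of a retry-on-conflict variant of the same scale-down, using client-go's Scale subresource; illustrative only, not minikube's code:)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleCoreDNSToOne re-reads the Scale object on every attempt so the
// resourceVersion is fresh; a stale resourceVersion is exactly what
// produced the Conflict logged above.
func scaleCoreDNSToOne(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	})
}
```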
	I0108 22:55:10.601879 1215935 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 22:55:10.604110 1215935 out.go:177] * Verifying Kubernetes components...
	I0108 22:55:10.606100 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:55:10.691004 1215935 command_runner.go:130] > apiVersion: v1
	I0108 22:55:10.691065 1215935 command_runner.go:130] > data:
	I0108 22:55:10.691084 1215935 command_runner.go:130] >   Corefile: |
	I0108 22:55:10.691101 1215935 command_runner.go:130] >     .:53 {
	I0108 22:55:10.691119 1215935 command_runner.go:130] >         errors
	I0108 22:55:10.691151 1215935 command_runner.go:130] >         health {
	I0108 22:55:10.691176 1215935 command_runner.go:130] >            lameduck 5s
	I0108 22:55:10.691197 1215935 command_runner.go:130] >         }
	I0108 22:55:10.691217 1215935 command_runner.go:130] >         ready
	I0108 22:55:10.691251 1215935 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 22:55:10.691271 1215935 command_runner.go:130] >            pods insecure
	I0108 22:55:10.691291 1215935 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 22:55:10.691308 1215935 command_runner.go:130] >            ttl 30
	I0108 22:55:10.691327 1215935 command_runner.go:130] >         }
	I0108 22:55:10.691355 1215935 command_runner.go:130] >         prometheus :9153
	I0108 22:55:10.691380 1215935 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 22:55:10.691401 1215935 command_runner.go:130] >            max_concurrent 1000
	I0108 22:55:10.691423 1215935 command_runner.go:130] >         }
	I0108 22:55:10.691441 1215935 command_runner.go:130] >         cache 30
	I0108 22:55:10.691476 1215935 command_runner.go:130] >         loop
	I0108 22:55:10.691494 1215935 command_runner.go:130] >         reload
	I0108 22:55:10.691513 1215935 command_runner.go:130] >         loadbalance
	I0108 22:55:10.691533 1215935 command_runner.go:130] >     }
	I0108 22:55:10.691560 1215935 command_runner.go:130] > kind: ConfigMap
	I0108 22:55:10.691581 1215935 command_runner.go:130] > metadata:
	I0108 22:55:10.691611 1215935 command_runner.go:130] >   creationTimestamp: "2024-01-08T22:54:57Z"
	I0108 22:55:10.691630 1215935 command_runner.go:130] >   name: coredns
	I0108 22:55:10.691657 1215935 command_runner.go:130] >   namespace: kube-system
	I0108 22:55:10.691678 1215935 command_runner.go:130] >   resourceVersion: "233"
	I0108 22:55:10.691696 1215935 command_runner.go:130] >   uid: a033cc71-eaed-455b-8c3e-2487e1b31106
	I0108 22:55:10.694727 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:55:10.695133 1215935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:55:10.695401 1215935 kapi.go:59] client config for multinode-265402: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:55:10.695718 1215935 node_ready.go:35] waiting up to 6m0s for node "multinode-265402" to be "Ready" ...
	I0108 22:55:10.695812 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:10.695823 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:10.695832 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:10.695839 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:10.731826 1215935 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0108 22:55:10.731919 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:10.731951 1215935 round_trippers.go:580]     Audit-Id: 7176ef7b-fc5c-4360-91cb-3791fba112d2
	I0108 22:55:10.731990 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:10.732040 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:10.732064 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:10.732092 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:10.732128 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:10 GMT
	I0108 22:55:10.744904 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:10.778827 1215935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:55:10.801071 1215935 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
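	(Editor's note: the two addon installs above follow the same pattern: the manifest is scp'd onto the node (e.g. /etc/kubernetes/addons/storage-provisioner.yaml, /etc/kubernetes/addons/storageclass.yaml) and then applied as root with the bundled kubectl. A minimal sketch of that step, with a hypothetical helper name and paths mirroring the log:)

```go
package main

import (
	"os"
	"os/exec"
)

// applyAddonManifest writes an addon manifest to the node-local addons
// directory and applies it with the bundled kubectl, as in the logged
// `sudo KUBECONFIG=... kubectl apply -f ...` commands.
func applyAddonManifest(manifest []byte, path string) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("/bin/bash", "-c",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f "+path)
	return cmd.Run()
}
```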
	I0108 22:55:11.196296 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:11.196362 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:11.196386 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:11.196409 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:11.271764 1215935 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0108 22:55:11.271842 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:11.271930 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:11.271956 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:11.271975 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:11 GMT
	I0108 22:55:11.271996 1215935 round_trippers.go:580]     Audit-Id: 7d877d89-7075-4849-b133-fbab4a6b6d94
	I0108 22:55:11.272015 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:11.272042 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:11.272276 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:11.411801 1215935 command_runner.go:130] > configmap/coredns replaced
	I0108 22:55:11.417098 1215935 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
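	(Editor's note: the sed pipeline logged at 22:55:10.694727 rewrites the coredns ConfigMap shown earlier, inserting a `log` directive before `errors` and a `hosts` block before the `forward` plugin so that host.minikube.internal resolves inside the cluster. Assuming the Corefile dumped above, the replaced Corefile should read roughly as:)

```
.:53 {
    log
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
```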
	I0108 22:55:11.513798 1215935 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 22:55:11.530782 1215935 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 22:55:11.563170 1215935 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 22:55:11.582836 1215935 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 22:55:11.618494 1215935 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 22:55:11.641954 1215935 command_runner.go:130] > pod/storage-provisioner created
	I0108 22:55:11.643417 1215935 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 22:55:11.643594 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 22:55:11.643622 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:11.643644 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:11.643666 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:11.650486 1215935 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 22:55:11.650574 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:11.650632 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:11.650654 1215935 round_trippers.go:580]     Content-Length: 1273
	I0108 22:55:11.650677 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:11 GMT
	I0108 22:55:11.650709 1215935 round_trippers.go:580]     Audit-Id: 254b52d8-18e0-4766-8c37-f92237a255aa
	I0108 22:55:11.650739 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:11.650762 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:11.650784 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:11.651105 1215935 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"357"},"items":[{"metadata":{"name":"standard","uid":"eb37dc62-deea-4cb9-8a1d-9fdc3e0b7d5e","resourceVersion":"346","creationTimestamp":"2024-01-08T22:55:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T22:55:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 22:55:11.651525 1215935 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eb37dc62-deea-4cb9-8a1d-9fdc3e0b7d5e","resourceVersion":"346","creationTimestamp":"2024-01-08T22:55:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T22:55:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 22:55:11.651600 1215935 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 22:55:11.651623 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:11.651658 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:11.651685 1215935 round_trippers.go:473]     Content-Type: application/json
	I0108 22:55:11.651707 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:11.661748 1215935 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 22:55:11.661810 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:11.661833 1215935 round_trippers.go:580]     Audit-Id: fb6082a7-e603-4b77-8d33-c4e5bf8594a2
	I0108 22:55:11.661864 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:11.661896 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:11.661922 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:11.661944 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:11.661963 1215935 round_trippers.go:580]     Content-Length: 1220
	I0108 22:55:11.661982 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:11 GMT
	I0108 22:55:11.662084 1215935 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eb37dc62-deea-4cb9-8a1d-9fdc3e0b7d5e","resourceVersion":"346","creationTimestamp":"2024-01-08T22:55:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T22:55:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 22:55:11.664858 1215935 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 22:55:11.667012 1215935 addons.go:508] enable addons completed in 1.211298808s: enabled=[storage-provisioner default-storageclass]
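	(Editor's note: the GET/PUT against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon re-asserting the `storageclass.kubernetes.io/is-default-class: "true"` annotation on the "standard" StorageClass. A hedged client-go sketch of the same effect; illustrative only, since minikube actually applies the storageclass.yaml manifest shown earlier:)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markStandardDefault ensures the "standard" StorageClass carries the
// is-default-class annotation, matching the PUT logged above.
func markStandardDefault(ctx context.Context, cs kubernetes.Interface) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}
```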
	I0108 22:55:11.696475 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:11.696545 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:11.696568 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:11.696590 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:11.702338 1215935 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 22:55:11.702437 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:11.702465 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:11.702502 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:11.702522 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:11.702544 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:11.702575 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:11 GMT
	I0108 22:55:11.702598 1215935 round_trippers.go:580]     Audit-Id: 08ea8b03-4d72-4d1f-b293-f9eddb7229e5
	I0108 22:55:11.702801 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:12.196216 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:12.196284 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:12.196300 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:12.196308 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:12.199695 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:12.199757 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:12.199779 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:12.199801 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:12.199836 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:12.199863 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:12 GMT
	I0108 22:55:12.199883 1215935 round_trippers.go:580]     Audit-Id: 063b56a6-b962-4c03-b3f3-329b1bc408a8
	I0108 22:55:12.199904 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:12.200045 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:12.696559 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:12.696659 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:12.696719 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:12.696734 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:12.699420 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:12.699447 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:12.699456 1215935 round_trippers.go:580]     Audit-Id: 13315701-7988-4b20-bc8e-1e5dcf8357c1
	I0108 22:55:12.699463 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:12.699469 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:12.699475 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:12.699492 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:12.699499 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:12 GMT
	I0108 22:55:12.699661 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:12.700095 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
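	(Editor's note: the repeated GET /api/v1/nodes/multinode-265402 requests above are the node_ready.go wait started at 22:55:10.695718: the node is fetched roughly every 500ms and the loop exits once the NodeReady condition turns True, or after 6m0s. A minimal sketch of that check, assuming a client-go clientset; illustrative, not minikube's code:)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the Node object and succeeds once its NodeReady
// condition is True, mirroring the 500ms GET cadence in the log.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q was not Ready within %s", name, timeout)
}
```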
	I0108 22:55:13.196856 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:13.196882 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:13.196892 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:13.196899 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:13.199545 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:13.199570 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:13.199579 1215935 round_trippers.go:580]     Audit-Id: 87e56fee-76a4-4f74-83c7-ef84dbf6d0e9
	I0108 22:55:13.199586 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:13.199611 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:13.199625 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:13.199638 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:13.199645 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:13 GMT
	I0108 22:55:13.199788 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:13.696786 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:13.696809 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:13.696821 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:13.696828 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:13.699491 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:13.699514 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:13.699523 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:13 GMT
	I0108 22:55:13.699530 1215935 round_trippers.go:580]     Audit-Id: fb08c92f-d48e-4650-9113-bf6d0e783a43
	I0108 22:55:13.699537 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:13.699547 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:13.699554 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:13.699561 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:13.699807 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:14.196241 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:14.196270 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:14.196281 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:14.196289 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:14.198982 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:14.199009 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:14.199018 1215935 round_trippers.go:580]     Audit-Id: da4c07b4-d112-4908-b693-0872192a1cc6
	I0108 22:55:14.199025 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:14.199031 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:14.199038 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:14.199047 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:14.199054 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:14 GMT
	I0108 22:55:14.199203 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:14.696822 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:14.696851 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:14.696862 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:14.696869 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:14.699305 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:14.699338 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:14.699347 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:14 GMT
	I0108 22:55:14.699354 1215935 round_trippers.go:580]     Audit-Id: 6738402e-8e30-4aa9-a19e-90c34a37c837
	I0108 22:55:14.699360 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:14.699366 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:14.699373 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:14.699381 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:14.699564 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:15.196828 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:15.196861 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:15.196873 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:15.196880 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:15.199680 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:15.199704 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:15.199714 1215935 round_trippers.go:580]     Audit-Id: 4049eea2-a4a3-42f6-9e95-fc18f23415a4
	I0108 22:55:15.199728 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:15.199735 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:15.199742 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:15.199748 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:15.199757 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:15 GMT
	I0108 22:55:15.199882 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:15.200287 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:15.695970 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:15.695991 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:15.696001 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:15.696008 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:15.698494 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:15.698522 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:15.698531 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:15 GMT
	I0108 22:55:15.698538 1215935 round_trippers.go:580]     Audit-Id: 0310ad7e-fe9e-434e-95c7-442a418f45b9
	I0108 22:55:15.698545 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:15.698551 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:15.698558 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:15.698568 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:15.698793 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:16.196349 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:16.196379 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:16.196390 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:16.196396 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:16.199681 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:16.199748 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:16.199771 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:16 GMT
	I0108 22:55:16.199793 1215935 round_trippers.go:580]     Audit-Id: 2ee9f0cf-2ab7-4710-80a6-4472c9aa4bf9
	I0108 22:55:16.199813 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:16.199839 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:16.199860 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:16.199877 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:16.200000 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:16.696581 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:16.696610 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:16.696620 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:16.696627 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:16.699199 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:16.699219 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:16.699229 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:16.699236 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:16.699243 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:16 GMT
	I0108 22:55:16.699252 1215935 round_trippers.go:580]     Audit-Id: cb7c631e-a241-4fca-895a-c6bea8888da1
	I0108 22:55:16.699262 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:16.699268 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:16.699411 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:17.196864 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:17.196896 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:17.196907 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:17.196914 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:17.199442 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:17.199463 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:17.199471 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:17.199478 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:17 GMT
	I0108 22:55:17.199484 1215935 round_trippers.go:580]     Audit-Id: 0de51f6b-2da2-4f1b-967e-5d2dd215964d
	I0108 22:55:17.199490 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:17.199496 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:17.199502 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:17.199659 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:17.696870 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:17.696896 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:17.696906 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:17.696913 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:17.699531 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:17.699559 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:17.699567 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:17.699574 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:17.699581 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:17.699588 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:17.699599 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:17 GMT
	I0108 22:55:17.699609 1215935 round_trippers.go:580]     Audit-Id: b199877a-fae7-48a4-894b-2c8180ed953f
	I0108 22:55:17.699827 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:17.700229 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:18.196788 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:18.196812 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:18.196822 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:18.196830 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:18.199292 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:18.199318 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:18.199327 1215935 round_trippers.go:580]     Audit-Id: d6636970-dbad-4f3e-8cde-1e7719959728
	I0108 22:55:18.199334 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:18.199353 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:18.199363 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:18.199372 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:18.199379 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:18 GMT
	I0108 22:55:18.199730 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:18.696010 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:18.696037 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:18.696047 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:18.696057 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:18.699061 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:18.699085 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:18.699094 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:18.699101 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:18.699108 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:18.699114 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:18 GMT
	I0108 22:55:18.699121 1215935 round_trippers.go:580]     Audit-Id: 0d585d00-d5dd-4d59-9efa-520da73f459d
	I0108 22:55:18.699139 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:18.699482 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:19.195983 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:19.196011 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:19.196022 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:19.196029 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:19.199299 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:19.199320 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:19.199329 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:19.199336 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:19.199342 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:19.199348 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:19.199367 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:19 GMT
	I0108 22:55:19.199373 1215935 round_trippers.go:580]     Audit-Id: a021d217-88ce-497c-b8b9-c06d2209bf38
	I0108 22:55:19.199893 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:19.696218 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:19.696247 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:19.696257 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:19.696265 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:19.698769 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:19.698797 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:19.698807 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:19.698814 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:19.698820 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:19.698827 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:19 GMT
	I0108 22:55:19.698834 1215935 round_trippers.go:580]     Audit-Id: 6a66fdc6-1547-4b91-b008-ed058c05f2af
	I0108 22:55:19.698845 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:19.699303 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:20.196128 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:20.196155 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:20.196166 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:20.196174 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:20.198870 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:20.198893 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:20.198905 1215935 round_trippers.go:580]     Audit-Id: ff85cd56-18ba-4f52-8d41-9d6f2e88c4ea
	I0108 22:55:20.198911 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:20.198917 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:20.198924 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:20.198931 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:20.198937 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:20 GMT
	I0108 22:55:20.199213 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:20.199621 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:20.696511 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:20.696537 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:20.696546 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:20.696553 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:20.699118 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:20.699149 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:20.699158 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:20 GMT
	I0108 22:55:20.699164 1215935 round_trippers.go:580]     Audit-Id: 1f128dd9-3b01-4f46-a0f7-772ad79d8f7f
	I0108 22:55:20.699170 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:20.699176 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:20.699182 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:20.699189 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:20.699389 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:21.196566 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:21.196592 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:21.196603 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:21.196610 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:21.199323 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:21.199348 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:21.199356 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:21.199367 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:21.199373 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:21.199379 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:21.199387 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:21 GMT
	I0108 22:55:21.199395 1215935 round_trippers.go:580]     Audit-Id: 4b8e0db5-4155-411f-96a0-a33eb6679438
	I0108 22:55:21.199585 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:21.695951 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:21.695978 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:21.695988 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:21.695995 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:21.698373 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:21.698396 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:21.698405 1215935 round_trippers.go:580]     Audit-Id: 7db59788-2f86-462f-91ad-ac8d0348576b
	I0108 22:55:21.698412 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:21.698418 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:21.698424 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:21.698431 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:21.698437 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:21 GMT
	I0108 22:55:21.698920 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:22.196815 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:22.196845 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:22.196859 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:22.196867 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:22.200141 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:22.200171 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:22.200180 1215935 round_trippers.go:580]     Audit-Id: 69810ef8-7983-44c4-b6dc-7e930721a633
	I0108 22:55:22.200187 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:22.200193 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:22.200199 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:22.200206 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:22.200212 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:22 GMT
	I0108 22:55:22.200344 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:22.200738 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:22.696554 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:22.696581 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:22.696591 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:22.696601 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:22.699404 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:22.699432 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:22.699441 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:22.699448 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:22.699455 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:22.699462 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:22 GMT
	I0108 22:55:22.699472 1215935 round_trippers.go:580]     Audit-Id: c6703803-d8b7-45d3-8df5-04ce4b52623e
	I0108 22:55:22.699479 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:22.699613 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:23.196873 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:23.196897 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:23.196907 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:23.196914 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:23.199405 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:23.199429 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:23.199438 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:23.199445 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:23 GMT
	I0108 22:55:23.199451 1215935 round_trippers.go:580]     Audit-Id: 3b4d61c6-6e1a-4ec6-ad19-d55f11b82599
	I0108 22:55:23.199458 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:23.199467 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:23.199474 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:23.199846 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:23.695977 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:23.696002 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:23.696013 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:23.696020 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:23.698703 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:23.698729 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:23.698738 1215935 round_trippers.go:580]     Audit-Id: e31b540a-b4f8-459a-8831-fa16caaadac9
	I0108 22:55:23.698744 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:23.698750 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:23.698756 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:23.698762 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:23.698768 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:23 GMT
	I0108 22:55:23.698913 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:24.196583 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:24.196608 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:24.196619 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:24.196626 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:24.199243 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:24.199266 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:24.199275 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:24.199281 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:24.199287 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:24 GMT
	I0108 22:55:24.199294 1215935 round_trippers.go:580]     Audit-Id: c7eb518f-6ee4-4b28-b81b-0eb11f8c9767
	I0108 22:55:24.199300 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:24.199310 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:24.199445 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:24.696467 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:24.696495 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:24.696506 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:24.696514 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:24.698959 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:24.698989 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:24.698999 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:24.699006 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:24.699013 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:24.699030 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:24 GMT
	I0108 22:55:24.699040 1215935 round_trippers.go:580]     Audit-Id: bcabd51a-c4e9-47c5-972a-aae385a0cde0
	I0108 22:55:24.699046 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:24.699290 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:24.699694 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:25.196873 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:25.196900 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:25.196909 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:25.196918 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:25.199497 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:25.199519 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:25.199528 1215935 round_trippers.go:580]     Audit-Id: 42870bcf-8d0e-4bc4-b5dc-cc5616236427
	I0108 22:55:25.199535 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:25.199541 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:25.199547 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:25.199553 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:25.199560 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:25 GMT
	I0108 22:55:25.199694 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:25.696529 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:25.696552 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:25.696562 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:25.696569 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:25.699241 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:25.699268 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:25.699277 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:25.699284 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:25.699290 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:25.699300 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:25.699310 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:25 GMT
	I0108 22:55:25.699316 1215935 round_trippers.go:580]     Audit-Id: 28ea872f-bf5d-4e14-a64f-f08c0525a929
	I0108 22:55:25.699465 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:26.196622 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:26.196649 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:26.196661 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:26.196669 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:26.199352 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:26.199383 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:26.199393 1215935 round_trippers.go:580]     Audit-Id: e3c5b4b1-aab1-4aa7-97bf-8771ce0300af
	I0108 22:55:26.199400 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:26.199407 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:26.199413 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:26.199420 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:26.199427 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:26 GMT
	I0108 22:55:26.199533 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:26.696672 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:26.696697 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:26.696707 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:26.696718 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:26.699243 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:26.699271 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:26.699280 1215935 round_trippers.go:580]     Audit-Id: 69e98402-72ea-4eba-991e-b5d08e2d28f0
	I0108 22:55:26.699298 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:26.699305 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:26.699311 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:26.699317 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:26.699324 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:26 GMT
	I0108 22:55:26.699728 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:26.700133 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:27.196680 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:27.196706 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:27.196717 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:27.196724 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:27.199490 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:27.199512 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:27.199520 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:27.199527 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:27.199533 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:27.199539 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:27.199546 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:27 GMT
	I0108 22:55:27.199552 1215935 round_trippers.go:580]     Audit-Id: 48c964a0-6c05-47f7-8c34-7f9673dcc02a
	I0108 22:55:27.199724 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:27.696481 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:27.696509 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:27.696518 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:27.696525 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:27.699209 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:27.699232 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:27.699241 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:27 GMT
	I0108 22:55:27.699247 1215935 round_trippers.go:580]     Audit-Id: 9c1727dd-8cef-4684-bbde-c8914b4448f1
	I0108 22:55:27.699253 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:27.699259 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:27.699265 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:27.699272 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:27.699421 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:28.196116 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:28.196141 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:28.196152 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:28.196160 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:28.198872 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:28.198896 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:28.198905 1215935 round_trippers.go:580]     Audit-Id: b2d3e7cb-c73f-4468-8eb1-478c26922b32
	I0108 22:55:28.198918 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:28.198925 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:28.198941 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:28.198953 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:28.198959 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:28 GMT
	I0108 22:55:28.199225 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:28.696827 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:28.696856 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:28.696868 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:28.696875 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:28.699462 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:28.699487 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:28.699495 1215935 round_trippers.go:580]     Audit-Id: 6269d2fa-b41c-4f33-8dc5-1c26bbb75d3e
	I0108 22:55:28.699502 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:28.699508 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:28.699514 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:28.699521 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:28.699527 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:28 GMT
	I0108 22:55:28.699865 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:28.700270 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
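	(Editor's note, not part of the captured log: the repeated GET requests above are minikube's node_ready wait loop polling the API server for the node's Ready condition roughly every 500 ms and logging "Ready":"False" while it waits. The sketch below is a minimal illustration only, assuming client-go and a kubeconfig flag; it is not minikube's actual implementation, and the flag names and default node name are assumptions.)

	// readiness_poll_sketch.go: poll a node's Ready condition until it is True,
	// mirroring the ~500 ms cadence visible in the log above.
	package main

	import (
		"context"
		"flag"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig")
		nodeName := flag.String("node", "multinode-265402", "node to wait for")
		flag.Parse()

		cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		for {
			// GET the node object, as the round_trippers lines above show.
			node, err := client.CoreV1().Nodes().Get(context.TODO(), *nodeName, metav1.GetOptions{})
			if err != nil {
				fmt.Println("get node:", err)
			} else {
				// Inspect the Ready condition and stop once it reports True.
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						fmt.Printf("node %q has status \"Ready\":%q\n", *nodeName, cond.Status)
						if cond.Status == corev1.ConditionTrue {
							return
						}
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}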
	I0108 22:55:29.195970 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:29.195996 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:29.196006 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:29.196013 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:29.198619 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:29.198639 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:29.198648 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:29.198654 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:29.198661 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:29 GMT
	I0108 22:55:29.198667 1215935 round_trippers.go:580]     Audit-Id: 0a365e6f-06ef-44f8-9222-50411480d2d6
	I0108 22:55:29.198673 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:29.198679 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:29.198835 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:29.696500 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:29.696525 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:29.696535 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:29.696543 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:29.699153 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:29.699174 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:29.699183 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:29.699190 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:29.699196 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:29.699204 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:29 GMT
	I0108 22:55:29.699210 1215935 round_trippers.go:580]     Audit-Id: 6343f378-e0b3-4c7b-b071-1803c56dc39b
	I0108 22:55:29.699216 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:29.699335 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:30.195943 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:30.195977 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:30.195988 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:30.195996 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:30.198748 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:30.198772 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:30.198781 1215935 round_trippers.go:580]     Audit-Id: 7e17e8bc-9863-4f26-af07-0389fc7cc5ab
	I0108 22:55:30.198788 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:30.198795 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:30.198801 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:30.198808 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:30.198814 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:30 GMT
	I0108 22:55:30.199314 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:30.696599 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:30.696665 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:30.696690 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:30.696714 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:30.699456 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:30.699484 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:30.699493 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:30.699500 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:30.699507 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:30.699513 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:30 GMT
	I0108 22:55:30.699519 1215935 round_trippers.go:580]     Audit-Id: 9c6ca9f0-85af-41c2-bfcc-bc6440275cd6
	I0108 22:55:30.699526 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:30.699679 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:31.196841 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:31.196867 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:31.196878 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:31.196887 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:31.199762 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:31.199791 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:31.199801 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:31 GMT
	I0108 22:55:31.199808 1215935 round_trippers.go:580]     Audit-Id: f68a894b-dd20-4cc6-aeae-e82813ebac29
	I0108 22:55:31.199818 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:31.199825 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:31.199836 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:31.199843 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:31.200050 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:31.200449 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:31.696628 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:31.696651 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:31.696662 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:31.696670 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:31.699288 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:31.699315 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:31.699324 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:31.699330 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:31.699337 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:31.699343 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:31.699350 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:31 GMT
	I0108 22:55:31.699361 1215935 round_trippers.go:580]     Audit-Id: 9271ea46-33ed-4201-9255-12a1d85bbcf1
	I0108 22:55:31.699472 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:32.196796 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:32.196821 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:32.196832 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:32.196839 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:32.199982 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:32.200007 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:32.200016 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:32.200023 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:32.200029 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:32.200036 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:32 GMT
	I0108 22:55:32.200043 1215935 round_trippers.go:580]     Audit-Id: 8cdcbfcb-047a-4c2f-837f-07f93e6616b9
	I0108 22:55:32.200049 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:32.200166 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:32.696456 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:32.696481 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:32.696490 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:32.696497 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:32.699336 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:32.699361 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:32.699370 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:32.699387 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:32.699393 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:32 GMT
	I0108 22:55:32.699399 1215935 round_trippers.go:580]     Audit-Id: 3ffbce27-4a63-439f-8419-f0f4016738d5
	I0108 22:55:32.699405 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:32.699412 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:32.699575 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:33.196734 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:33.196760 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:33.196771 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:33.196778 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:33.199380 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:33.199440 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:33.199449 1215935 round_trippers.go:580]     Audit-Id: f45fd01f-3e43-4024-ac48-d00a02258701
	I0108 22:55:33.199456 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:33.199462 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:33.199468 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:33.199475 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:33.199482 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:33 GMT
	I0108 22:55:33.199617 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:33.696786 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:33.696810 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:33.696819 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:33.696827 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:33.699355 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:33.699389 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:33.699399 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:33.699406 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:33 GMT
	I0108 22:55:33.699413 1215935 round_trippers.go:580]     Audit-Id: af18bef7-44ce-4657-a6d1-e37f8bcd1cc5
	I0108 22:55:33.699420 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:33.699430 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:33.699439 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:33.699743 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:33.700140 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:34.196768 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:34.196794 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:34.196804 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:34.196811 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:34.199313 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:34.199336 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:34.199348 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:34.199354 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:34.199360 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:34.199366 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:34.199373 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:34 GMT
	I0108 22:55:34.199382 1215935 round_trippers.go:580]     Audit-Id: 6b84c88e-5b5d-463b-a319-34462d3f7e50
	I0108 22:55:34.199532 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:34.696642 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:34.696663 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:34.696673 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:34.696680 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:34.699106 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:34.699127 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:34.699135 1215935 round_trippers.go:580]     Audit-Id: 2e1a8534-03f6-46aa-88ad-e22ca02014d7
	I0108 22:55:34.699142 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:34.699148 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:34.699154 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:34.699160 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:34.699167 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:34 GMT
	I0108 22:55:34.699329 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:35.196190 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:35.196216 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:35.196226 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:35.196233 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:35.198730 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:35.198761 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:35.198771 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:35 GMT
	I0108 22:55:35.198777 1215935 round_trippers.go:580]     Audit-Id: 78917c1a-e3c0-40d5-81ad-1aaa292ef41c
	I0108 22:55:35.198783 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:35.198790 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:35.198796 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:35.198809 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:35.198926 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:35.696587 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:35.696612 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:35.696621 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:35.696629 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:35.698974 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:35.698997 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:35.699006 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:35.699012 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:35.699019 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:35 GMT
	I0108 22:55:35.699026 1215935 round_trippers.go:580]     Audit-Id: 4fd90692-61b9-498e-96a4-9b459e0532f7
	I0108 22:55:35.699032 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:35.699043 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:35.699186 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:36.195974 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:36.196000 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:36.196011 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:36.196019 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:36.198584 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:36.198610 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:36.198619 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:36.198626 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:36.198632 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:36.198642 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:36.198656 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:36 GMT
	I0108 22:55:36.198667 1215935 round_trippers.go:580]     Audit-Id: 211e0ff6-6208-4111-b25c-49ef1e6af170
	I0108 22:55:36.198831 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:36.199244 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:36.696552 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:36.696577 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:36.696604 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:36.696613 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:36.699163 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:36.699194 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:36.699204 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:36.699213 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:36 GMT
	I0108 22:55:36.699230 1215935 round_trippers.go:580]     Audit-Id: 4ced43c0-8839-443f-980d-f656092707ec
	I0108 22:55:36.699236 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:36.699243 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:36.699253 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:36.699492 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:37.196263 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:37.196289 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:37.196300 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:37.196307 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:37.198850 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:37.198869 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:37.198877 1215935 round_trippers.go:580]     Audit-Id: fa79cee2-41d9-4dfa-9be9-5a21c5b57849
	I0108 22:55:37.198884 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:37.198890 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:37.198896 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:37.198903 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:37.198909 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:37 GMT
	I0108 22:55:37.199028 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:37.695972 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:37.696000 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:37.696010 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:37.696017 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:37.698534 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:37.698561 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:37.698570 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:37.698582 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:37.698589 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:37.698596 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:37 GMT
	I0108 22:55:37.698602 1215935 round_trippers.go:580]     Audit-Id: b66d6b7a-f080-4591-9bd8-8f3e048b9b45
	I0108 22:55:37.698612 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:37.698810 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:38.196022 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:38.196048 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:38.196058 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:38.196065 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:38.198606 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:38.198633 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:38.198643 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:38.198650 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:38.198659 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:38.198666 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:38 GMT
	I0108 22:55:38.198672 1215935 round_trippers.go:580]     Audit-Id: 363b408e-9dca-4149-aae4-01f5dda3bf9b
	I0108 22:55:38.198682 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:38.198839 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:38.695893 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:38.695917 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:38.695927 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:38.695934 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:38.698485 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:38.698509 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:38.698518 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:38.698526 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:38 GMT
	I0108 22:55:38.698532 1215935 round_trippers.go:580]     Audit-Id: 55f2bb02-99dc-4b97-84b9-bac8f52e2afa
	I0108 22:55:38.698539 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:38.698551 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:38.698564 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:38.698910 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:38.699357 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:39.196612 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:39.196638 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:39.196648 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:39.196655 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:39.199177 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:39.199201 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:39.199210 1215935 round_trippers.go:580]     Audit-Id: 6f2beced-2099-4c50-b522-0837f6b78eed
	I0108 22:55:39.199216 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:39.199223 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:39.199229 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:39.199235 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:39.199242 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:39 GMT
	I0108 22:55:39.199354 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:39.696491 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:39.696516 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:39.696526 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:39.696533 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:39.699036 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:39.699059 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:39.699068 1215935 round_trippers.go:580]     Audit-Id: 537212d9-b03b-46c1-a5bf-c7eab6f78f5a
	I0108 22:55:39.699074 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:39.699080 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:39.699086 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:39.699092 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:39.699099 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:39 GMT
	I0108 22:55:39.699391 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:40.196206 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:40.196234 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:40.196245 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:40.196261 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:40.198998 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:40.199022 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:40.199030 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:40 GMT
	I0108 22:55:40.199037 1215935 round_trippers.go:580]     Audit-Id: 1491f12f-4b28-458b-b799-ae266b8abf5b
	I0108 22:55:40.199043 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:40.199050 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:40.199056 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:40.199071 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:40.199269 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:40.696720 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:40.696745 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:40.696755 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:40.696762 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:40.699270 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:40.699290 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:40.699298 1215935 round_trippers.go:580]     Audit-Id: 83aecc6e-975a-4860-adae-29d4c110654b
	I0108 22:55:40.699305 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:40.699311 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:40.699318 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:40.699324 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:40.699330 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:40 GMT
	I0108 22:55:40.699482 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:40.699875 1215935 node_ready.go:58] node "multinode-265402" has status "Ready":"False"
	I0108 22:55:41.196724 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:41.196752 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:41.196763 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:41.196770 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:41.199273 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:41.199300 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:41.199309 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:41.199315 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:41.199321 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:41.199328 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:41 GMT
	I0108 22:55:41.199334 1215935 round_trippers.go:580]     Audit-Id: 07a8e5a2-9bd3-4e46-90d8-bbde0e768c41
	I0108 22:55:41.199343 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:41.199473 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:41.696658 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:41.696683 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:41.696693 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:41.696701 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:41.699227 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:41.699254 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:41.699263 1215935 round_trippers.go:580]     Audit-Id: 8c2ae4cc-a604-4ea3-a175-7fe7ff0a0b71
	I0108 22:55:41.699270 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:41.699277 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:41.699283 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:41.699289 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:41.699295 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:41 GMT
	I0108 22:55:41.699694 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:42.196746 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:42.196781 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:42.196794 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:42.196806 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:42.200712 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:42.200739 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:42.200749 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:42.200756 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:42.200763 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:42.200770 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:42 GMT
	I0108 22:55:42.200776 1215935 round_trippers.go:580]     Audit-Id: 39783eb0-1831-4b7c-80d8-cae5f06afb9f
	I0108 22:55:42.200783 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:42.201194 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"295","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0108 22:55:42.696415 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:42.696443 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:42.696453 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:42.696461 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:42.713615 1215935 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0108 22:55:42.713642 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:42.713651 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:42.713658 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:42 GMT
	I0108 22:55:42.713669 1215935 round_trippers.go:580]     Audit-Id: 31be313c-fe83-4969-a3bc-3e121d57db05
	I0108 22:55:42.713680 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:42.713690 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:42.713697 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:42.714271 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:42.714665 1215935 node_ready.go:49] node "multinode-265402" has status "Ready":"True"
	I0108 22:55:42.714686 1215935 node_ready.go:38] duration metric: took 32.018948641s waiting for node "multinode-265402" to be "Ready" ...
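For reference, the poll above is watching the node's Ready condition flip from False to True. A minimal client-go sketch of the same check — the kubeconfig path is an assumption, the node name is the one from this run — could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its context into ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node until its Ready condition is True, mirroring the GETs in the log.
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-265402", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node multinode-265402 is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}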
	I0108 22:55:42.714698 1215935 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:55:42.714769 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:55:42.714780 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:42.714788 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:42.714794 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:42.724830 1215935 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 22:55:42.724858 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:42.724867 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:42.724874 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:42.724881 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:42.724887 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:42 GMT
	I0108 22:55:42.724893 1215935 round_trippers.go:580]     Audit-Id: 705acc6a-255b-4a43-a41f-b7f2497b276b
	I0108 22:55:42.724899 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:42.726463 1215935 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"388"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"386","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62677 chars]
	I0108 22:55:42.731230 1215935 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dhbdf" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:42.731343 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhbdf
	I0108 22:55:42.731354 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:42.731363 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:42.731377 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:42.759362 1215935 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0108 22:55:42.759385 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:42.759393 1215935 round_trippers.go:580]     Audit-Id: 573fb560-56c9-4c43-adda-253d1eea6c6c
	I0108 22:55:42.759400 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:42.759406 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:42.759413 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:42.759419 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:42.759425 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:42 GMT
	I0108 22:55:42.775279 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"386","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 22:55:42.775842 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:42.775861 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:42.775873 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:42.775881 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:42.786787 1215935 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 22:55:42.786809 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:42.786818 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:42.786824 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:42.786830 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:42.786836 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:42 GMT
	I0108 22:55:42.786843 1215935 round_trippers.go:580]     Audit-Id: 98791168-35f1-4ed5-87c5-2bcc2a9b7799
	I0108 22:55:42.786849 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:42.786990 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.232148 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhbdf
	I0108 22:55:43.232188 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.232200 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.232207 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.234956 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.235011 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.235021 1215935 round_trippers.go:580]     Audit-Id: 83b719f6-6a8d-4e98-88dd-203233808106
	I0108 22:55:43.235028 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.235034 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.235040 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.235047 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.235053 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.235181 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"386","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0108 22:55:43.235708 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.235728 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.235737 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.235744 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.238176 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.238196 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.238263 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.238275 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.238282 1215935 round_trippers.go:580]     Audit-Id: df35a437-0591-4150-9387-adf506829c09
	I0108 22:55:43.238289 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.238330 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.238370 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.238596 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.732067 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-dhbdf
	I0108 22:55:43.732162 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.732179 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.732187 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.735139 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.735160 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.735175 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.735183 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.735192 1215935 round_trippers.go:580]     Audit-Id: 277d7128-844f-4d3d-875e-a1dd432f67cf
	I0108 22:55:43.735198 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.735204 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.735210 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.735337 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"404","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 22:55:43.735934 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.735944 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.735953 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.735960 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.738227 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.738282 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.738306 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.738328 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.738363 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.738390 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.738404 1215935 round_trippers.go:580]     Audit-Id: 3142b8d8-625f-455c-9722-64f78ea740d8
	I0108 22:55:43.738411 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.738549 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.738931 1215935 pod_ready.go:92] pod "coredns-5dd5756b68-dhbdf" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:43.738949 1215935 pod_ready.go:81] duration metric: took 1.007688749s waiting for pod "coredns-5dd5756b68-dhbdf" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.738960 1215935 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.739026 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jxvsh
	I0108 22:55:43.739036 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.739044 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.739051 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.741406 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.741465 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.741485 1215935 round_trippers.go:580]     Audit-Id: 6c094c5e-1f77-456a-a374-fda0c57a579c
	I0108 22:55:43.741505 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.741538 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.741563 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.741575 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.741582 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.741750 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jxvsh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e30c3eb4-3f0a-40da-b222-8987a1951271","resourceVersion":"399","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 22:55:43.742266 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.742283 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.742291 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.742298 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.744527 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.744580 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.744600 1215935 round_trippers.go:580]     Audit-Id: b4677516-d5be-4e71-a747-b02b12113b15
	I0108 22:55:43.744620 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.744654 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.744676 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.744694 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.744707 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.744857 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.745280 1215935 pod_ready.go:92] pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:43.745302 1215935 pod_ready.go:81] duration metric: took 6.33197ms waiting for pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.745347 1215935 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.745408 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-265402
	I0108 22:55:43.745418 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.745426 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.745433 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.747575 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.747595 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.747603 1215935 round_trippers.go:580]     Audit-Id: b01d807b-b3b8-45d9-b55d-e5c2d4108221
	I0108 22:55:43.747609 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.747616 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.747622 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.747628 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.747639 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.747952 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-265402","namespace":"kube-system","uid":"f0b50c9c-24ac-44c2-97d3-fd32c3fd1783","resourceVersion":"274","creationTimestamp":"2024-01-08T22:54:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9fbc9ea72760953590c9db956803870a","kubernetes.io/config.mirror":"9fbc9ea72760953590c9db956803870a","kubernetes.io/config.seen":"2024-01-08T22:54:49.639698825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 22:55:43.748410 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.748427 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.748436 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.748443 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.750591 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.750610 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.750618 1215935 round_trippers.go:580]     Audit-Id: e9d84ff3-7e26-4f88-b70a-41a549e30022
	I0108 22:55:43.750625 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.750631 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.750638 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.750645 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.750654 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.750810 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.751187 1215935 pod_ready.go:92] pod "etcd-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:43.751205 1215935 pod_ready.go:81] duration metric: took 5.851079ms waiting for pod "etcd-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.751220 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.751282 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-265402
	I0108 22:55:43.751291 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.751299 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.751306 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.753535 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.753581 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.753602 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.753623 1215935 round_trippers.go:580]     Audit-Id: f85a5ed9-c50b-442f-ba53-72a22b37ab95
	I0108 22:55:43.753656 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.753680 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.753699 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.753719 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.753854 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-265402","namespace":"kube-system","uid":"f89662f8-7aea-4d49-b0bc-369a5a93317e","resourceVersion":"267","creationTimestamp":"2024-01-08T22:54:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"153e2a2653c145307af823d6bdf14ecf","kubernetes.io/config.mirror":"153e2a2653c145307af823d6bdf14ecf","kubernetes.io/config.seen":"2024-01-08T22:54:49.639704773Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 22:55:43.754419 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.754435 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.754444 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.754451 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.756502 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.756547 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.756569 1215935 round_trippers.go:580]     Audit-Id: 93677e83-ced8-41b1-a110-d4973c7ea4b0
	I0108 22:55:43.756592 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.756627 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.756651 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.756671 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.756692 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.756806 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.757220 1215935 pod_ready.go:92] pod "kube-apiserver-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:43.757255 1215935 pod_ready.go:81] duration metric: took 6.023352ms waiting for pod "kube-apiserver-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.757266 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:43.757323 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-265402
	I0108 22:55:43.757333 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.757341 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.757347 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.759795 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.759847 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.759884 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.759909 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.759932 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.759966 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.759991 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.760011 1215935 round_trippers.go:580]     Audit-Id: 0046b4c6-7243-4c5d-ad25-3440b2104809
	I0108 22:55:43.760187 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-265402","namespace":"kube-system","uid":"bd5f7307-7f0b-4bd9-a40a-0c7adf289bc5","resourceVersion":"271","creationTimestamp":"2024-01-08T22:54:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ec989d2fba992978bd611606acd568a","kubernetes.io/config.mirror":"6ec989d2fba992978bd611606acd568a","kubernetes.io/config.seen":"2024-01-08T22:54:49.639706037Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 22:55:43.897015 1215935 request.go:629] Waited for 136.298269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.897095 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:43.897104 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:43.897114 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:43.897121 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:43.899615 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:43.899680 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:43.899703 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:43 GMT
	I0108 22:55:43.899727 1215935 round_trippers.go:580]     Audit-Id: 4e9f5755-39a9-4cd7-b24e-2e536a158bbb
	I0108 22:55:43.899763 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:43.899779 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:43.899788 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:43.899794 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:43.899910 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:43.900310 1215935 pod_ready.go:92] pod "kube-controller-manager-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:43.900330 1215935 pod_ready.go:81] duration metric: took 143.055786ms waiting for pod "kube-controller-manager-multinode-265402" in "kube-system" namespace to be "Ready" ...
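The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's default client-side rate limiter (QPS 5, Burst 10), not from the API server. A hedged sketch of raising those limits when building a client — the kubeconfig path and the values are illustrative assumptions:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; QPS/Burst values are arbitrary examples.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default client-side limit is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client configured:", cs != nil)
}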
	I0108 22:55:43.900343 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shpdw" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:44.096762 1215935 request.go:629] Waited for 196.35177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shpdw
	I0108 22:55:44.096872 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shpdw
	I0108 22:55:44.096886 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.096897 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.096904 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.099791 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:44.099897 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.099913 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.099920 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.099939 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.099953 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.099963 1215935 round_trippers.go:580]     Audit-Id: 94cba0c2-1557-4432-bae0-ba9bc189e21f
	I0108 22:55:44.099969 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.100137 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-shpdw","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d87b28d-e3f3-48e7-9d07-f96a102d9294","resourceVersion":"364","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d75724e6-2fba-4f4e-b9af-60383dc2d915","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75724e6-2fba-4f4e-b9af-60383dc2d915\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 22:55:44.297063 1215935 request.go:629] Waited for 196.366818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:44.297146 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:44.297159 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.297184 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.297198 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.299667 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:44.299731 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.299753 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.299772 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.299805 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.299819 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.299826 1215935 round_trippers.go:580]     Audit-Id: 879e8e36-81e0-47c6-80f6-e155d8c7b62d
	I0108 22:55:44.299833 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.299948 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:44.300350 1215935 pod_ready.go:92] pod "kube-proxy-shpdw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:44.300372 1215935 pod_ready.go:81] duration metric: took 400.022112ms waiting for pod "kube-proxy-shpdw" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:44.300383 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:44.497177 1215935 request.go:629] Waited for 196.702323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265402
	I0108 22:55:44.497260 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265402
	I0108 22:55:44.497273 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.497284 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.497295 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.499715 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:44.499737 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.499746 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.499752 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.499759 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.499765 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.499772 1215935 round_trippers.go:580]     Audit-Id: 1d09c637-2411-4771-975c-682144041540
	I0108 22:55:44.499778 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.499971 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-265402","namespace":"kube-system","uid":"8fd809b7-494a-4b6e-a556-cdb385c37788","resourceVersion":"275","creationTimestamp":"2024-01-08T22:54:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7755fe167392a0b43d8453d49a1480f3","kubernetes.io/config.mirror":"7755fe167392a0b43d8453d49a1480f3","kubernetes.io/config.seen":"2024-01-08T22:54:58.076778153Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 22:55:44.696743 1215935 request.go:629] Waited for 196.345747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:44.696821 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:55:44.696832 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.696842 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.696871 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.699448 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:44.699474 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.699483 1215935 round_trippers.go:580]     Audit-Id: 1d857010-e3e9-4919-8f17-b6865bf9c569
	I0108 22:55:44.699490 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.699496 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.699502 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.699508 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.699515 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.699622 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:55:44.700022 1215935 pod_ready.go:92] pod "kube-scheduler-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:44.700048 1215935 pod_ready.go:81] duration metric: took 399.65747ms waiting for pod "kube-scheduler-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:44.700075 1215935 pod_ready.go:38] duration metric: took 1.985355719s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
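The per-pod waits above all reduce to checking each pod's Ready condition. A small client-go sketch that lists one of the label selectors named in the log (k8s-app=kube-dns) and reports readiness — kubeconfig path again assumed — might be:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors listed in the log; the others work the same way.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}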
	I0108 22:55:44.700096 1215935 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:55:44.700160 1215935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:55:44.711866 1215935 command_runner.go:130] > 1257
	I0108 22:55:44.713393 1215935 api_server.go:72] duration metric: took 34.1114872s to wait for apiserver process to appear ...
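The process check above runs pgrep over SSH inside the node. Roughly the same check, run locally with os/exec (assuming sudo and pgrep are available on the host), could look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pattern the log uses: newest process whose full command line matches.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}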
	I0108 22:55:44.713456 1215935 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:55:44.713489 1215935 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 22:55:44.723249 1215935 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
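The healthz probe above can be reproduced through client-go's REST client, which reuses the cluster's TLS settings instead of a raw HTTP call. A sketch, with the kubeconfig path again assumed:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz against the apiserver; a healthy control plane returns "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}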
	I0108 22:55:44.723323 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0108 22:55:44.723329 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.723338 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.723347 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.724568 1215935 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 22:55:44.724594 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.724608 1215935 round_trippers.go:580]     Audit-Id: 1e3b8add-f353-475d-b242-e69ca7a01b2c
	I0108 22:55:44.724620 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.724626 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.724633 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.724639 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.724646 1215935 round_trippers.go:580]     Content-Length: 264
	I0108 22:55:44.724652 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.724679 1215935 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0108 22:55:44.724779 1215935 api_server.go:141] control plane version: v1.28.4
	I0108 22:55:44.724797 1215935 api_server.go:131] duration metric: took 11.322082ms to wait for apiserver health ...
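
The two probes above are minikube's readiness gate for the control plane: an unauthenticated GET on /healthz, then /version to record the server build. Assuming a kubeconfig that points at this cluster (the endpoint and profile come from the log; the commands are an illustrative sketch, not part of the test), the same checks look like:

    # Aggregate health of the apiserver; prints "ok" when every check passes.
    kubectl get --raw /healthz

    # The same version document the log captured (major, minor, gitVersion, ...).
    kubectl get --raw /version
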
	I0108 22:55:44.724804 1215935 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:55:44.897192 1215935 request.go:629] Waited for 172.325635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:55:44.897252 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:55:44.897261 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:44.897270 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:44.897280 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:44.900671 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:44.900691 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:44.900700 1215935 round_trippers.go:580]     Audit-Id: eba60794-0621-447d-a844-df034dbb950f
	I0108 22:55:44.900707 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:44.900713 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:44.900719 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:44.900726 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:44.900732 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:44 GMT
	I0108 22:55:44.901788 1215935 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"404","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0108 22:55:44.904427 1215935 system_pods.go:59] 9 kube-system pods found
	I0108 22:55:44.904466 1215935 system_pods.go:61] "coredns-5dd5756b68-dhbdf" [49fd4e2f-0617-4904-8f59-17192c16fa4f] Running
	I0108 22:55:44.904473 1215935 system_pods.go:61] "coredns-5dd5756b68-jxvsh" [e30c3eb4-3f0a-40da-b222-8987a1951271] Running
	I0108 22:55:44.904478 1215935 system_pods.go:61] "etcd-multinode-265402" [f0b50c9c-24ac-44c2-97d3-fd32c3fd1783] Running
	I0108 22:55:44.904483 1215935 system_pods.go:61] "kindnet-q4lsx" [96b091bd-0c31-4260-b432-241ed01a60ac] Running
	I0108 22:55:44.904488 1215935 system_pods.go:61] "kube-apiserver-multinode-265402" [f89662f8-7aea-4d49-b0bc-369a5a93317e] Running
	I0108 22:55:44.904494 1215935 system_pods.go:61] "kube-controller-manager-multinode-265402" [bd5f7307-7f0b-4bd9-a40a-0c7adf289bc5] Running
	I0108 22:55:44.904506 1215935 system_pods.go:61] "kube-proxy-shpdw" [6d87b28d-e3f3-48e7-9d07-f96a102d9294] Running
	I0108 22:55:44.904512 1215935 system_pods.go:61] "kube-scheduler-multinode-265402" [8fd809b7-494a-4b6e-a556-cdb385c37788] Running
	I0108 22:55:44.904520 1215935 system_pods.go:61] "storage-provisioner" [21e5850b-39b9-4ab2-b42f-056e41fc39e0] Running
	I0108 22:55:44.904526 1215935 system_pods.go:74] duration metric: took 179.716945ms to wait for pod list to return data ...
	I0108 22:55:44.904534 1215935 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:55:45.096801 1215935 request.go:629] Waited for 192.177357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 22:55:45.096950 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0108 22:55:45.096959 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:45.096968 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:45.096983 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:45.100425 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:55:45.100470 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:45.100482 1215935 round_trippers.go:580]     Audit-Id: 216e44f7-8ea2-4540-b54e-d842e82b830f
	I0108 22:55:45.100490 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:45.100497 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:45.100504 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:45.100512 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:45.100519 1215935 round_trippers.go:580]     Content-Length: 261
	I0108 22:55:45.100526 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:45 GMT
	I0108 22:55:45.100561 1215935 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5304cb86-4bfe-4552-b782-c2e0c3fb5cee","resourceVersion":"309","creationTimestamp":"2024-01-08T22:55:10Z"}}]}
	I0108 22:55:45.100909 1215935 default_sa.go:45] found service account: "default"
	I0108 22:55:45.100937 1215935 default_sa.go:55] duration metric: took 196.384541ms for default service account to be created ...
	I0108 22:55:45.100949 1215935 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:55:45.296701 1215935 request.go:629] Waited for 195.650277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:55:45.296770 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:55:45.296780 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:45.296796 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:45.296809 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:45.301116 1215935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 22:55:45.301150 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:45.301164 1215935 round_trippers.go:580]     Audit-Id: be24a037-40d3-46cd-b8bc-6f5529de1522
	I0108 22:55:45.301172 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:45.301179 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:45.301185 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:45.301192 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:45.301200 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:45 GMT
	I0108 22:55:45.302000 1215935 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-dhbdf","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49fd4e2f-0617-4904-8f59-17192c16fa4f","resourceVersion":"404","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0108 22:55:45.306426 1215935 system_pods.go:86] 9 kube-system pods found
	I0108 22:55:45.306464 1215935 system_pods.go:89] "coredns-5dd5756b68-dhbdf" [49fd4e2f-0617-4904-8f59-17192c16fa4f] Running
	I0108 22:55:45.306473 1215935 system_pods.go:89] "coredns-5dd5756b68-jxvsh" [e30c3eb4-3f0a-40da-b222-8987a1951271] Running
	I0108 22:55:45.306481 1215935 system_pods.go:89] "etcd-multinode-265402" [f0b50c9c-24ac-44c2-97d3-fd32c3fd1783] Running
	I0108 22:55:45.306486 1215935 system_pods.go:89] "kindnet-q4lsx" [96b091bd-0c31-4260-b432-241ed01a60ac] Running
	I0108 22:55:45.306491 1215935 system_pods.go:89] "kube-apiserver-multinode-265402" [f89662f8-7aea-4d49-b0bc-369a5a93317e] Running
	I0108 22:55:45.306498 1215935 system_pods.go:89] "kube-controller-manager-multinode-265402" [bd5f7307-7f0b-4bd9-a40a-0c7adf289bc5] Running
	I0108 22:55:45.306511 1215935 system_pods.go:89] "kube-proxy-shpdw" [6d87b28d-e3f3-48e7-9d07-f96a102d9294] Running
	I0108 22:55:45.306517 1215935 system_pods.go:89] "kube-scheduler-multinode-265402" [8fd809b7-494a-4b6e-a556-cdb385c37788] Running
	I0108 22:55:45.306524 1215935 system_pods.go:89] "storage-provisioner" [21e5850b-39b9-4ab2-b42f-056e41fc39e0] Running
	I0108 22:55:45.306534 1215935 system_pods.go:126] duration metric: took 205.577742ms to wait for k8s-apps to be running ...
	I0108 22:55:45.306554 1215935 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:55:45.306629 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:55:45.326135 1215935 system_svc.go:56] duration metric: took 19.56731ms WaitForService to wait for kubelet.
	I0108 22:55:45.326213 1215935 kubeadm.go:581] duration metric: took 34.724310217s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
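
At this point the start-up wait is satisfied: every system-critical pod reports Running, the default service account exists, and the kubelet unit is active. A rough manual equivalent of those last checks (a working kubectl context and SSH access to the node are assumed; this is a sketch, not the tool's own code):

    # Every kube-system pod should be in the Running phase.
    kubectl -n kube-system get pods -o wide

    # On the node itself, confirm the kubelet unit is active.
    sudo systemctl is-active --quiet kubelet && echo "kubelet active"
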
	I0108 22:55:45.326240 1215935 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:55:45.496557 1215935 request.go:629] Waited for 170.232274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 22:55:45.496615 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 22:55:45.496622 1215935 round_trippers.go:469] Request Headers:
	I0108 22:55:45.496632 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:55:45.496645 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:55:45.499334 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:55:45.499483 1215935 round_trippers.go:577] Response Headers:
	I0108 22:55:45.499506 1215935 round_trippers.go:580]     Audit-Id: 07c6e58f-3410-4e80-97f4-ad6e460f388c
	I0108 22:55:45.499514 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:55:45.499521 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:55:45.499527 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:55:45.499570 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:55:45.499584 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:55:45 GMT
	I0108 22:55:45.499708 1215935 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0108 22:55:45.500250 1215935 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 22:55:45.500279 1215935 node_conditions.go:123] node cpu capacity is 2
	I0108 22:55:45.500295 1215935 node_conditions.go:105] duration metric: took 174.039289ms to run NodePressure ...
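
The NodePressure step reads capacity and conditions straight off the Node object: 203034800Ki of ephemeral storage and 2 CPUs here, with no pressure conditions set. The same fields can be inspected directly; the commands below are illustrative only and reuse the node name from the log:

    # Capacity (cpu, memory, ephemeral-storage) as the scheduler sees it.
    kubectl describe node multinode-265402 | grep -A 6 '^Capacity:'

    # Pressure conditions should all be False and Ready should be True.
    kubectl get node multinode-265402 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
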
	I0108 22:55:45.500313 1215935 start.go:228] waiting for startup goroutines ...
	I0108 22:55:45.500326 1215935 start.go:233] waiting for cluster config update ...
	I0108 22:55:45.500338 1215935 start.go:242] writing updated cluster config ...
	I0108 22:55:45.503578 1215935 out.go:177] 
	I0108 22:55:45.506202 1215935 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:55:45.506292 1215935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json ...
	I0108 22:55:45.508976 1215935 out.go:177] * Starting worker node multinode-265402-m02 in cluster multinode-265402
	I0108 22:55:45.511276 1215935 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:55:45.513399 1215935 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:55:45.515242 1215935 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:55:45.515304 1215935 cache.go:56] Caching tarball of preloaded images
	I0108 22:55:45.515353 1215935 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:55:45.515465 1215935 preload.go:174] Found /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0108 22:55:45.515477 1215935 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 22:55:45.515611 1215935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json ...
	I0108 22:55:45.536249 1215935 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon, skipping pull
	I0108 22:55:45.536273 1215935 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in daemon, skipping load
	I0108 22:55:45.536297 1215935 cache.go:194] Successfully downloaded all kic artifacts
	I0108 22:55:45.536328 1215935 start.go:365] acquiring machines lock for multinode-265402-m02: {Name:mkddeb9047e5062cf8e480f3397a097810757d3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:55:45.536456 1215935 start.go:369] acquired machines lock for "multinode-265402-m02" in 111.203µs
	I0108 22:55:45.536484 1215935 start.go:93] Provisioning new machine with config: &{Name:multinode-265402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 22:55:45.536565 1215935 start.go:125] createHost starting for "m02" (driver="docker")
	I0108 22:55:45.539766 1215935 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 22:55:45.539900 1215935 start.go:159] libmachine.API.Create for "multinode-265402" (driver="docker")
	I0108 22:55:45.539940 1215935 client.go:168] LocalClient.Create starting
	I0108 22:55:45.540016 1215935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem
	I0108 22:55:45.540054 1215935 main.go:141] libmachine: Decoding PEM data...
	I0108 22:55:45.540081 1215935 main.go:141] libmachine: Parsing certificate...
	I0108 22:55:45.540138 1215935 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem
	I0108 22:55:45.540167 1215935 main.go:141] libmachine: Decoding PEM data...
	I0108 22:55:45.540182 1215935 main.go:141] libmachine: Parsing certificate...
	I0108 22:55:45.540463 1215935 cli_runner.go:164] Run: docker network inspect multinode-265402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:55:45.559245 1215935 network_create.go:77] Found existing network {name:multinode-265402 subnet:0x4002d7e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0108 22:55:45.559333 1215935 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-265402-m02" container
	I0108 22:55:45.559484 1215935 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 22:55:45.577923 1215935 cli_runner.go:164] Run: docker volume create multinode-265402-m02 --label name.minikube.sigs.k8s.io=multinode-265402-m02 --label created_by.minikube.sigs.k8s.io=true
	I0108 22:55:45.597385 1215935 oci.go:103] Successfully created a docker volume multinode-265402-m02
	I0108 22:55:45.597477 1215935 cli_runner.go:164] Run: docker run --rm --name multinode-265402-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265402-m02 --entrypoint /usr/bin/test -v multinode-265402-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -d /var/lib
	I0108 22:55:46.174692 1215935 oci.go:107] Successfully prepared a docker volume multinode-265402-m02
	I0108 22:55:46.174755 1215935 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:55:46.174780 1215935 kic.go:194] Starting extracting preloaded images to volume ...
	I0108 22:55:46.174876 1215935 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-265402-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir
	I0108 22:55:50.533302 1215935 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-265402-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d -I lz4 -xf /preloaded.tar -C /extractDir: (4.358370508s)
	I0108 22:55:50.533339 1215935 kic.go:203] duration metric: took 4.358551 seconds to extract preloaded images to volume
	W0108 22:55:50.533474 1215935 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 22:55:50.533595 1215935 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 22:55:50.603181 1215935 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-265402-m02 --name multinode-265402-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265402-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-265402-m02 --network multinode-265402 --ip 192.168.58.3 --volume multinode-265402-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d
	I0108 22:55:50.974254 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Running}}
	I0108 22:55:50.995291 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Status}}
	I0108 22:55:51.029977 1215935 cli_runner.go:164] Run: docker exec multinode-265402-m02 stat /var/lib/dpkg/alternatives/iptables
	I0108 22:55:51.118132 1215935 oci.go:144] the created container "multinode-265402-m02" has a running status.
	I0108 22:55:51.118165 1215935 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa...
	I0108 22:55:51.444350 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0108 22:55:51.444396 1215935 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0108 22:55:51.470654 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Status}}
	I0108 22:55:51.494364 1215935 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 22:55:51.494385 1215935 kic_runner.go:114] Args: [docker exec --privileged multinode-265402-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 22:55:51.569996 1215935 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Status}}
	I0108 22:55:51.594724 1215935 machine.go:88] provisioning docker machine ...
	I0108 22:55:51.594762 1215935 ubuntu.go:169] provisioning hostname "multinode-265402-m02"
	I0108 22:55:51.594828 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:51.638697 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:51.641077 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34113 <nil> <nil>}
	I0108 22:55:51.641103 1215935 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-265402-m02 && echo "multinode-265402-m02" | sudo tee /etc/hostname
	I0108 22:55:51.641673 1215935 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58854->127.0.0.1:34113: read: connection reset by peer
	I0108 22:55:54.788896 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-265402-m02
	
	I0108 22:55:54.788975 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:54.813277 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:54.813695 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34113 <nil> <nil>}
	I0108 22:55:54.813721 1215935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-265402-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-265402-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-265402-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:55:54.946239 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
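
The two SSH commands above set the new machine's hostname and make sure /etc/hosts maps 127.0.1.1 to it, so local name resolution works before kubeadm runs on the node. Verifying the result is straightforward (illustrative only):

    # Kernel hostname and the loopback mapping the provisioning script ensured.
    hostname
    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts
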
	I0108 22:55:54.946264 1215935 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 22:55:54.946280 1215935 ubuntu.go:177] setting up certificates
	I0108 22:55:54.946291 1215935 provision.go:83] configureAuth start
	I0108 22:55:54.946350 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402-m02
	I0108 22:55:54.967104 1215935 provision.go:138] copyHostCerts
	I0108 22:55:54.967144 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:55:54.967176 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 22:55:54.967182 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 22:55:54.967261 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 22:55:54.967334 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:55:54.967349 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 22:55:54.967354 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 22:55:54.967379 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 22:55:54.967416 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:55:54.967431 1215935 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 22:55:54.967435 1215935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 22:55:54.967456 1215935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 22:55:54.967513 1215935 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.multinode-265402-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-265402-m02]
	I0108 22:55:55.226502 1215935 provision.go:172] copyRemoteCerts
	I0108 22:55:55.226575 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:55:55.226636 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:55.244463 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:55:55.345536 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 22:55:55.345601 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:55:55.375320 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 22:55:55.375387 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 22:55:55.404762 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 22:55:55.404829 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 22:55:55.434077 1215935 provision.go:86] duration metric: configureAuth took 487.773116ms
	I0108 22:55:55.434104 1215935 ubuntu.go:193] setting minikube options for container-runtime
	I0108 22:55:55.434310 1215935 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:55:55.434421 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:55.452374 1215935 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:55.452858 1215935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34113 <nil> <nil>}
	I0108 22:55:55.452877 1215935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 22:55:55.705500 1215935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 22:55:55.705528 1215935 machine.go:91] provisioned docker machine in 4.110782448s
	I0108 22:55:55.705539 1215935 client.go:171] LocalClient.Create took 10.16558936s
	I0108 22:55:55.705553 1215935 start.go:167] duration metric: libmachine.API.Create for "multinode-265402" took 10.165654246s
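
The SSH step just before these timings wrote /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' and restarted CRI-O; in the kic base image that file is consumed as an environment file by the crio unit (an assumption about the image layout, not something shown in this log). A quick way to confirm the option landed:

    # The options file minikube wrote on the node.
    cat /etc/sysconfig/crio.minikube

    # Check whether the crio unit references it (layout assumption noted above).
    systemctl cat crio | grep -i crio.minikube
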
	I0108 22:55:55.705564 1215935 start.go:300] post-start starting for "multinode-265402-m02" (driver="docker")
	I0108 22:55:55.705574 1215935 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:55:55.705645 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:55:55.705693 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:55.730361 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:55:55.832169 1215935 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:55:55.836367 1215935 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0108 22:55:55.836390 1215935 command_runner.go:130] > NAME="Ubuntu"
	I0108 22:55:55.836398 1215935 command_runner.go:130] > VERSION_ID="22.04"
	I0108 22:55:55.836404 1215935 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0108 22:55:55.836410 1215935 command_runner.go:130] > VERSION_CODENAME=jammy
	I0108 22:55:55.836437 1215935 command_runner.go:130] > ID=ubuntu
	I0108 22:55:55.836448 1215935 command_runner.go:130] > ID_LIKE=debian
	I0108 22:55:55.836454 1215935 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0108 22:55:55.836465 1215935 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0108 22:55:55.836474 1215935 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0108 22:55:55.836486 1215935 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0108 22:55:55.836492 1215935 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0108 22:55:55.836608 1215935 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 22:55:55.836637 1215935 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 22:55:55.836650 1215935 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 22:55:55.836661 1215935 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0108 22:55:55.836672 1215935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 22:55:55.836731 1215935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 22:55:55.836820 1215935 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 22:55:55.836833 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /etc/ssl/certs/11522512.pem
	I0108 22:55:55.836951 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 22:55:55.847714 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:55:55.877299 1215935 start.go:303] post-start completed in 171.719091ms
	I0108 22:55:55.877708 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402-m02
	I0108 22:55:55.895555 1215935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/config.json ...
	I0108 22:55:55.895837 1215935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:55:55.895887 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:55.913943 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:55:56.014200 1215935 command_runner.go:130] > 19%!
	(MISSING)I0108 22:55:56.015065 1215935 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 22:55:56.021628 1215935 command_runner.go:130] > 159G
	I0108 22:55:56.021660 1215935 start.go:128] duration metric: createHost completed in 10.485084936s
	I0108 22:55:56.021670 1215935 start.go:83] releasing machines lock for "multinode-265402-m02", held for 10.485205904s
	I0108 22:55:56.021746 1215935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402-m02
	I0108 22:55:56.043344 1215935 out.go:177] * Found network options:
	I0108 22:55:56.045231 1215935 out.go:177]   - NO_PROXY=192.168.58.2
	W0108 22:55:56.047221 1215935 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 22:55:56.047263 1215935 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 22:55:56.047353 1215935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 22:55:56.047404 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:56.047715 1215935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:55:56.047773 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:55:56.071951 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:55:56.075196 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:55:56.328156 1215935 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
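
The curl above is a reachability probe for registry.k8s.io run from inside the node, with NO_PROXY=192.168.58.2 exported so traffic to the control plane bypasses any proxy; the "Temporary Redirect" body is the expected healthy answer. A standalone sketch of the same probe (the IP and timeout are taken from the log):

    # Probe the image registry; curl honours NO_PROXY for the excluded address.
    NO_PROXY=192.168.58.2 curl -sS -m 2 https://registry.k8s.io/
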
	I0108 22:55:56.330474 1215935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 22:55:56.337186 1215935 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0108 22:55:56.337212 1215935 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0108 22:55:56.337220 1215935 command_runner.go:130] > Device: c2h/194d	Inode: 1044667     Links: 1
	I0108 22:55:56.337228 1215935 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:55:56.337235 1215935 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0108 22:55:56.337241 1215935 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0108 22:55:56.337247 1215935 command_runner.go:130] > Change: 2024-01-08 22:30:53.898544185 +0000
	I0108 22:55:56.337253 1215935 command_runner.go:130] >  Birth: 2024-01-08 22:30:53.898544185 +0000
	I0108 22:55:56.337646 1215935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:55:56.362294 1215935 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 22:55:56.362380 1215935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:55:56.404870 1215935 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0108 22:55:56.404914 1215935 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
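
The find/mv runs above are minikube's CNI cleanup: pre-existing loopback, bridge, and podman configs under /etc/cni/net.d get an .mk_disabled suffix so CRI-O ignores them and only the CNI minikube installs (kindnet in this profile) is loaded. What the runtime will still pick up can be listed like this (illustrative):

    # Everything CRI-O will still load: entries without the .mk_disabled suffix.
    ls /etc/cni/net.d/ | grep -v '\.mk_disabled' || echo "no active CNI configs"
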
	I0108 22:55:56.404923 1215935 start.go:475] detecting cgroup driver to use...
	I0108 22:55:56.404955 1215935 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 22:55:56.405032 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 22:55:56.425457 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 22:55:56.439647 1215935 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:55:56.439723 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:55:56.455478 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:55:56.472823 1215935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:55:56.574203 1215935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:55:56.675021 1215935 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 22:55:56.675053 1215935 docker.go:219] disabling docker service ...
	I0108 22:55:56.675103 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:55:56.698516 1215935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:55:56.712526 1215935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:55:56.806699 1215935 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 22:55:56.806826 1215935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:55:56.910742 1215935 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 22:55:56.910874 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
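
Because the kic base image ships Docker, cri-dockerd, and CRI-O side by side, minikube stops and masks the Docker-related units before configuring CRI-O, which is what the systemctl calls above do. The same sequence by hand looks roughly like this (a sketch of the idea, not a transcript of the exact commands):

    # Stop and mask cri-dockerd so it cannot claim the CRI socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # Do the same for the Docker engine itself.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service

    # Neither should report active afterwards.
    systemctl is-active docker cri-docker || true
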
	I0108 22:55:56.927971 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:55:56.948013 1215935 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 22:55:56.949511 1215935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 22:55:56.949609 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:55:56.962553 1215935 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 22:55:56.962643 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:55:56.977765 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:55:56.992720 1215935 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 22:55:57.007366 1215935 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:55:57.021306 1215935 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:55:57.033092 1215935 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 22:55:57.042701 1215935 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:55:57.054355 1215935 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:55:57.157506 1215935 ssh_runner.go:195] Run: sudo systemctl restart crio
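
The sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the registry.k8s.io/pause:3.9 pause image, switch CRI-O to the cgroupfs cgroup manager (matching the cgroupfs driver detected on the host), and place conmon in the pod cgroup, before the daemon is restarted. The effective values can be checked like this (illustrative):

    # The drop-in minikube edits; these keys should show the values set above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf

    # What the running daemon actually resolves.
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
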
	I0108 22:55:57.292932 1215935 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 22:55:57.293043 1215935 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 22:55:57.297897 1215935 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 22:55:57.297972 1215935 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 22:55:57.298014 1215935 command_runner.go:130] > Device: cbh/203d	Inode: 190         Links: 1
	I0108 22:55:57.298041 1215935 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:55:57.298060 1215935 command_runner.go:130] > Access: 2024-01-08 22:55:57.276374699 +0000
	I0108 22:55:57.298082 1215935 command_runner.go:130] > Modify: 2024-01-08 22:55:57.276374699 +0000
	I0108 22:55:57.298114 1215935 command_runner.go:130] > Change: 2024-01-08 22:55:57.276374699 +0000
	I0108 22:55:57.298134 1215935 command_runner.go:130] >  Birth: -
	I0108 22:55:57.298423 1215935 start.go:543] Will wait 60s for crictl version
	I0108 22:55:57.298506 1215935 ssh_runner.go:195] Run: which crictl
	I0108 22:55:57.302652 1215935 command_runner.go:130] > /usr/bin/crictl
	I0108 22:55:57.303153 1215935 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:55:57.347677 1215935 command_runner.go:130] > Version:  0.1.0
	I0108 22:55:57.347749 1215935 command_runner.go:130] > RuntimeName:  cri-o
	I0108 22:55:57.347770 1215935 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0108 22:55:57.347792 1215935 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 22:55:57.350643 1215935 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0108 22:55:57.350791 1215935 ssh_runner.go:195] Run: crio --version
	I0108 22:55:57.398846 1215935 command_runner.go:130] > crio version 1.24.6
	I0108 22:55:57.398870 1215935 command_runner.go:130] > Version:          1.24.6
	I0108 22:55:57.398880 1215935 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 22:55:57.398886 1215935 command_runner.go:130] > GitTreeState:     clean
	I0108 22:55:57.398893 1215935 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 22:55:57.398898 1215935 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 22:55:57.398904 1215935 command_runner.go:130] > Compiler:         gc
	I0108 22:55:57.398909 1215935 command_runner.go:130] > Platform:         linux/arm64
	I0108 22:55:57.398915 1215935 command_runner.go:130] > Linkmode:         dynamic
	I0108 22:55:57.398936 1215935 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 22:55:57.398946 1215935 command_runner.go:130] > SeccompEnabled:   true
	I0108 22:55:57.398961 1215935 command_runner.go:130] > AppArmorEnabled:  false
	I0108 22:55:57.401146 1215935 ssh_runner.go:195] Run: crio --version
	I0108 22:55:57.442628 1215935 command_runner.go:130] > crio version 1.24.6
	I0108 22:55:57.442661 1215935 command_runner.go:130] > Version:          1.24.6
	I0108 22:55:57.442670 1215935 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0108 22:55:57.442676 1215935 command_runner.go:130] > GitTreeState:     clean
	I0108 22:55:57.442698 1215935 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0108 22:55:57.442711 1215935 command_runner.go:130] > GoVersion:        go1.18.2
	I0108 22:55:57.442736 1215935 command_runner.go:130] > Compiler:         gc
	I0108 22:55:57.442749 1215935 command_runner.go:130] > Platform:         linux/arm64
	I0108 22:55:57.442783 1215935 command_runner.go:130] > Linkmode:         dynamic
	I0108 22:55:57.442814 1215935 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 22:55:57.442827 1215935 command_runner.go:130] > SeccompEnabled:   true
	I0108 22:55:57.442832 1215935 command_runner.go:130] > AppArmorEnabled:  false
	I0108 22:55:57.448524 1215935 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0108 22:55:57.451335 1215935 out.go:177]   - env NO_PROXY=192.168.58.2
	I0108 22:55:57.453441 1215935 cli_runner.go:164] Run: docker network inspect multinode-265402 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 22:55:57.471550 1215935 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0108 22:55:57.476436 1215935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:55:57.490186 1215935 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402 for IP: 192.168.58.3
	I0108 22:55:57.490216 1215935 certs.go:190] acquiring lock for shared ca certs: {Name:mk2f5e9ada40477437d91c2ac8d6b62bb5d1e97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:57.490345 1215935 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key
	I0108 22:55:57.490381 1215935 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key
	I0108 22:55:57.490390 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 22:55:57.490404 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 22:55:57.490429 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 22:55:57.490441 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 22:55:57.490500 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem (1338 bytes)
	W0108 22:55:57.490528 1215935 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251_empty.pem, impossibly tiny 0 bytes
	I0108 22:55:57.490537 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 22:55:57.490563 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:55:57.490586 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:55:57.490608 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem (1675 bytes)
	I0108 22:55:57.490657 1215935 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 22:55:57.490686 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> /usr/share/ca-certificates/11522512.pem
	I0108 22:55:57.490698 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:57.490709 1215935 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem -> /usr/share/ca-certificates/1152251.pem
	I0108 22:55:57.491036 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:55:57.520737 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 22:55:57.550839 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:55:57.579772 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:55:57.609975 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /usr/share/ca-certificates/11522512.pem (1708 bytes)
	I0108 22:55:57.640626 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:55:57.670240 1215935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/1152251.pem --> /usr/share/ca-certificates/1152251.pem (1338 bytes)
	I0108 22:55:57.699813 1215935 ssh_runner.go:195] Run: openssl version
	I0108 22:55:57.706703 1215935 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0108 22:55:57.707130 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11522512.pem && ln -fs /usr/share/ca-certificates/11522512.pem /etc/ssl/certs/11522512.pem"
	I0108 22:55:57.719777 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11522512.pem
	I0108 22:55:57.724283 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 22:39 /usr/share/ca-certificates/11522512.pem
	I0108 22:55:57.724545 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 22:39 /usr/share/ca-certificates/11522512.pem
	I0108 22:55:57.724606 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11522512.pem
	I0108 22:55:57.732706 1215935 command_runner.go:130] > 3ec20f2e
	I0108 22:55:57.733133 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11522512.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 22:55:57.744916 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:55:57.756656 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:57.761534 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:57.761583 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:31 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:57.761635 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:57.770243 1215935 command_runner.go:130] > b5213941
	I0108 22:55:57.770697 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:55:57.782828 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1152251.pem && ln -fs /usr/share/ca-certificates/1152251.pem /etc/ssl/certs/1152251.pem"
	I0108 22:55:57.794487 1215935 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1152251.pem
	I0108 22:55:57.799319 1215935 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 22:39 /usr/share/ca-certificates/1152251.pem
	I0108 22:55:57.799348 1215935 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 22:39 /usr/share/ca-certificates/1152251.pem
	I0108 22:55:57.799404 1215935 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1152251.pem
	I0108 22:55:57.807720 1215935 command_runner.go:130] > 51391683
	I0108 22:55:57.808159 1215935 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1152251.pem /etc/ssl/certs/51391683.0"
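	The three openssl/ln steps above follow the usual c_rehash convention: each CA certificate is hashed with `openssl x509 -hash -noout` and a `<hash>.0` symlink pointing at it is placed in /etc/ssl/certs so OpenSSL-based clients can find it by subject hash. A minimal Go sketch of that step, assuming openssl is on PATH and the process can write to /etc/ssl/certs; the helper name and paths are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCACert hashes a PEM certificate the same way the log does
    // (openssl x509 -hash -noout -in <pem>) and points /etc/ssl/certs/<hash>.0
    // at it, mirroring the `ln -fs` commands above.
    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // drop a stale link first, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }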
	I0108 22:55:57.820336 1215935 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:55:57.824766 1215935 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:55:57.824800 1215935 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:55:57.824954 1215935 ssh_runner.go:195] Run: crio config
	I0108 22:55:57.879543 1215935 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 22:55:57.879573 1215935 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 22:55:57.879583 1215935 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 22:55:57.879588 1215935 command_runner.go:130] > #
	I0108 22:55:57.879597 1215935 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 22:55:57.879605 1215935 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 22:55:57.879614 1215935 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 22:55:57.879630 1215935 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 22:55:57.879638 1215935 command_runner.go:130] > # reload'.
	I0108 22:55:57.879654 1215935 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 22:55:57.879665 1215935 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 22:55:57.879673 1215935 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 22:55:57.879685 1215935 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 22:55:57.879691 1215935 command_runner.go:130] > [crio]
	I0108 22:55:57.879699 1215935 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 22:55:57.879708 1215935 command_runner.go:130] > # containers images, in this directory.
	I0108 22:55:57.879719 1215935 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0108 22:55:57.879730 1215935 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 22:55:57.879908 1215935 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0108 22:55:57.879931 1215935 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 22:55:57.879940 1215935 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 22:55:57.879949 1215935 command_runner.go:130] > # storage_driver = "vfs"
	I0108 22:55:57.879964 1215935 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 22:55:57.879972 1215935 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 22:55:57.879984 1215935 command_runner.go:130] > # storage_option = [
	I0108 22:55:57.880147 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.880163 1215935 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 22:55:57.880171 1215935 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 22:55:57.880185 1215935 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 22:55:57.880192 1215935 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 22:55:57.880203 1215935 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 22:55:57.880211 1215935 command_runner.go:130] > # always happen on a node reboot
	I0108 22:55:57.880220 1215935 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 22:55:57.880227 1215935 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 22:55:57.880236 1215935 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 22:55:57.880245 1215935 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 22:55:57.880255 1215935 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 22:55:57.880265 1215935 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 22:55:57.880279 1215935 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 22:55:57.880286 1215935 command_runner.go:130] > # internal_wipe = true
	I0108 22:55:57.880297 1215935 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 22:55:57.880305 1215935 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 22:55:57.880312 1215935 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 22:55:57.880321 1215935 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 22:55:57.880328 1215935 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 22:55:57.880336 1215935 command_runner.go:130] > [crio.api]
	I0108 22:55:57.880343 1215935 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 22:55:57.880354 1215935 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 22:55:57.880360 1215935 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 22:55:57.880370 1215935 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 22:55:57.880379 1215935 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 22:55:57.880389 1215935 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 22:55:57.880395 1215935 command_runner.go:130] > # stream_port = "0"
	I0108 22:55:57.880401 1215935 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 22:55:57.880410 1215935 command_runner.go:130] > # stream_enable_tls = false
	I0108 22:55:57.880417 1215935 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 22:55:57.880427 1215935 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 22:55:57.880435 1215935 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 22:55:57.880446 1215935 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 22:55:57.880451 1215935 command_runner.go:130] > # minutes.
	I0108 22:55:57.880460 1215935 command_runner.go:130] > # stream_tls_cert = ""
	I0108 22:55:57.880468 1215935 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 22:55:57.880480 1215935 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 22:55:57.880485 1215935 command_runner.go:130] > # stream_tls_key = ""
	I0108 22:55:57.880493 1215935 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 22:55:57.880503 1215935 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 22:55:57.880510 1215935 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 22:55:57.880520 1215935 command_runner.go:130] > # stream_tls_ca = ""
	I0108 22:55:57.880530 1215935 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 22:55:57.880539 1215935 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0108 22:55:57.880548 1215935 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 22:55:57.880557 1215935 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0108 22:55:57.880573 1215935 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 22:55:57.880582 1215935 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 22:55:57.880595 1215935 command_runner.go:130] > [crio.runtime]
	I0108 22:55:57.880607 1215935 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 22:55:57.880614 1215935 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 22:55:57.880623 1215935 command_runner.go:130] > # "nofile=1024:2048"
	I0108 22:55:57.880630 1215935 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 22:55:57.880640 1215935 command_runner.go:130] > # default_ulimits = [
	I0108 22:55:57.880645 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.880652 1215935 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 22:55:57.880661 1215935 command_runner.go:130] > # no_pivot = false
	I0108 22:55:57.880668 1215935 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 22:55:57.880676 1215935 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 22:55:57.880685 1215935 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 22:55:57.880692 1215935 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 22:55:57.880703 1215935 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 22:55:57.880711 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 22:55:57.880928 1215935 command_runner.go:130] > # conmon = ""
	I0108 22:55:57.880942 1215935 command_runner.go:130] > # Cgroup setting for conmon
	I0108 22:55:57.880951 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 22:55:57.880957 1215935 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 22:55:57.880967 1215935 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 22:55:57.880974 1215935 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 22:55:57.880989 1215935 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 22:55:57.881014 1215935 command_runner.go:130] > # conmon_env = [
	I0108 22:55:57.881019 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881026 1215935 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 22:55:57.881039 1215935 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 22:55:57.881047 1215935 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 22:55:57.881054 1215935 command_runner.go:130] > # default_env = [
	I0108 22:55:57.881059 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881077 1215935 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 22:55:57.881087 1215935 command_runner.go:130] > # selinux = false
	I0108 22:55:57.881095 1215935 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 22:55:57.881109 1215935 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 22:55:57.881116 1215935 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 22:55:57.881125 1215935 command_runner.go:130] > # seccomp_profile = ""
	I0108 22:55:57.881133 1215935 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 22:55:57.881141 1215935 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 22:55:57.881150 1215935 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 22:55:57.881159 1215935 command_runner.go:130] > # which might increase security.
	I0108 22:55:57.881165 1215935 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0108 22:55:57.881177 1215935 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 22:55:57.881186 1215935 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 22:55:57.881197 1215935 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 22:55:57.881205 1215935 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 22:55:57.881215 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:55:57.881221 1215935 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 22:55:57.881231 1215935 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 22:55:57.881237 1215935 command_runner.go:130] > # the cgroup blockio controller.
	I0108 22:55:57.881243 1215935 command_runner.go:130] > # blockio_config_file = ""
	I0108 22:55:57.881253 1215935 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 22:55:57.881263 1215935 command_runner.go:130] > # irqbalance daemon.
	I0108 22:55:57.881271 1215935 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 22:55:57.881284 1215935 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 22:55:57.881292 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:55:57.881424 1215935 command_runner.go:130] > # rdt_config_file = ""
	I0108 22:55:57.881439 1215935 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 22:55:57.881450 1215935 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 22:55:57.881464 1215935 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 22:55:57.881470 1215935 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 22:55:57.881478 1215935 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 22:55:57.881489 1215935 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 22:55:57.881494 1215935 command_runner.go:130] > # will be added.
	I0108 22:55:57.881503 1215935 command_runner.go:130] > # default_capabilities = [
	I0108 22:55:57.881508 1215935 command_runner.go:130] > # 	"CHOWN",
	I0108 22:55:57.881515 1215935 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 22:55:57.881522 1215935 command_runner.go:130] > # 	"FSETID",
	I0108 22:55:57.881527 1215935 command_runner.go:130] > # 	"FOWNER",
	I0108 22:55:57.881536 1215935 command_runner.go:130] > # 	"SETGID",
	I0108 22:55:57.881561 1215935 command_runner.go:130] > # 	"SETUID",
	I0108 22:55:57.881569 1215935 command_runner.go:130] > # 	"SETPCAP",
	I0108 22:55:57.881575 1215935 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 22:55:57.881581 1215935 command_runner.go:130] > # 	"KILL",
	I0108 22:55:57.881585 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881595 1215935 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0108 22:55:57.881609 1215935 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0108 22:55:57.881616 1215935 command_runner.go:130] > # add_inheritable_capabilities = true
	I0108 22:55:57.881626 1215935 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 22:55:57.881638 1215935 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 22:55:57.881644 1215935 command_runner.go:130] > # default_sysctls = [
	I0108 22:55:57.881790 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881803 1215935 command_runner.go:130] > # List of devices on the host that a
	I0108 22:55:57.881811 1215935 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 22:55:57.881817 1215935 command_runner.go:130] > # allowed_devices = [
	I0108 22:55:57.881824 1215935 command_runner.go:130] > # 	"/dev/fuse",
	I0108 22:55:57.881828 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881841 1215935 command_runner.go:130] > # List of additional devices. specified as
	I0108 22:55:57.881859 1215935 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 22:55:57.881870 1215935 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 22:55:57.881878 1215935 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 22:55:57.881886 1215935 command_runner.go:130] > # additional_devices = [
	I0108 22:55:57.881892 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881902 1215935 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 22:55:57.881910 1215935 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 22:55:57.881917 1215935 command_runner.go:130] > # 	"/etc/cdi",
	I0108 22:55:57.881925 1215935 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 22:55:57.881930 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.881942 1215935 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 22:55:57.881953 1215935 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 22:55:57.881962 1215935 command_runner.go:130] > # Defaults to false.
	I0108 22:55:57.881968 1215935 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 22:55:57.881977 1215935 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 22:55:57.881987 1215935 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 22:55:57.881992 1215935 command_runner.go:130] > # hooks_dir = [
	I0108 22:55:57.881998 1215935 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 22:55:57.882003 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.882012 1215935 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 22:55:57.882022 1215935 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 22:55:57.882032 1215935 command_runner.go:130] > # its default mounts from the following two files:
	I0108 22:55:57.882037 1215935 command_runner.go:130] > #
	I0108 22:55:57.882045 1215935 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 22:55:57.882056 1215935 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 22:55:57.882064 1215935 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 22:55:57.882071 1215935 command_runner.go:130] > #
	I0108 22:55:57.882079 1215935 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 22:55:57.882087 1215935 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 22:55:57.882097 1215935 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 22:55:57.882103 1215935 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 22:55:57.882109 1215935 command_runner.go:130] > #
	I0108 22:55:57.882122 1215935 command_runner.go:130] > # default_mounts_file = ""
	I0108 22:55:57.882132 1215935 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 22:55:57.882141 1215935 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 22:55:57.882149 1215935 command_runner.go:130] > # pids_limit = 0
	I0108 22:55:57.882157 1215935 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 22:55:57.882168 1215935 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 22:55:57.882176 1215935 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 22:55:57.882188 1215935 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 22:55:57.882349 1215935 command_runner.go:130] > # log_size_max = -1
	I0108 22:55:57.882368 1215935 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 22:55:57.882374 1215935 command_runner.go:130] > # log_to_journald = false
	I0108 22:55:57.882384 1215935 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 22:55:57.882395 1215935 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 22:55:57.882402 1215935 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 22:55:57.882408 1215935 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 22:55:57.882416 1215935 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 22:55:57.882424 1215935 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 22:55:57.882434 1215935 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 22:55:57.882439 1215935 command_runner.go:130] > # read_only = false
	I0108 22:55:57.882450 1215935 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 22:55:57.882459 1215935 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 22:55:57.882468 1215935 command_runner.go:130] > # live configuration reload.
	I0108 22:55:57.882475 1215935 command_runner.go:130] > # log_level = "info"
	I0108 22:55:57.882486 1215935 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 22:55:57.882492 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:55:57.882497 1215935 command_runner.go:130] > # log_filter = ""
	I0108 22:55:57.882508 1215935 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 22:55:57.882518 1215935 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 22:55:57.882528 1215935 command_runner.go:130] > # separated by comma.
	I0108 22:55:57.882533 1215935 command_runner.go:130] > # uid_mappings = ""
	I0108 22:55:57.882541 1215935 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 22:55:57.882552 1215935 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 22:55:57.882557 1215935 command_runner.go:130] > # separated by comma.
	I0108 22:55:57.882740 1215935 command_runner.go:130] > # gid_mappings = ""
	I0108 22:55:57.882757 1215935 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 22:55:57.882765 1215935 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 22:55:57.882774 1215935 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 22:55:57.882779 1215935 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 22:55:57.882790 1215935 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 22:55:57.882798 1215935 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 22:55:57.882809 1215935 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 22:55:57.882815 1215935 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 22:55:57.882827 1215935 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 22:55:57.882836 1215935 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 22:55:57.882847 1215935 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 22:55:57.882852 1215935 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 22:55:57.882859 1215935 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 22:55:57.882867 1215935 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 22:55:57.882877 1215935 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 22:55:57.882883 1215935 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 22:55:57.882892 1215935 command_runner.go:130] > # drop_infra_ctr = true
	I0108 22:55:57.882899 1215935 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 22:55:57.882910 1215935 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 22:55:57.882920 1215935 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 22:55:57.882929 1215935 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 22:55:57.882937 1215935 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 22:55:57.882944 1215935 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 22:55:57.882950 1215935 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 22:55:57.882961 1215935 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 22:55:57.882971 1215935 command_runner.go:130] > # pinns_path = ""
	I0108 22:55:57.882979 1215935 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 22:55:57.882991 1215935 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 22:55:57.882999 1215935 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 22:55:57.883007 1215935 command_runner.go:130] > # default_runtime = "runc"
	I0108 22:55:57.883014 1215935 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 22:55:57.883023 1215935 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 22:55:57.883034 1215935 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 22:55:57.883045 1215935 command_runner.go:130] > # creation as a file is not desired either.
	I0108 22:55:57.883056 1215935 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 22:55:57.883066 1215935 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 22:55:57.883073 1215935 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 22:55:57.883082 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.883090 1215935 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 22:55:57.883098 1215935 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 22:55:57.883106 1215935 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 22:55:57.883120 1215935 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 22:55:57.883125 1215935 command_runner.go:130] > #
	I0108 22:55:57.883136 1215935 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 22:55:57.883143 1215935 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 22:55:57.883153 1215935 command_runner.go:130] > #  runtime_type = "oci"
	I0108 22:55:57.883159 1215935 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 22:55:57.883166 1215935 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 22:55:57.883178 1215935 command_runner.go:130] > #  allowed_annotations = []
	I0108 22:55:57.883183 1215935 command_runner.go:130] > # Where:
	I0108 22:55:57.883190 1215935 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 22:55:57.883197 1215935 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 22:55:57.883205 1215935 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 22:55:57.883216 1215935 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 22:55:57.883222 1215935 command_runner.go:130] > #   in $PATH.
	I0108 22:55:57.883235 1215935 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 22:55:57.883241 1215935 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 22:55:57.883253 1215935 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 22:55:57.883259 1215935 command_runner.go:130] > #   state.
	I0108 22:55:57.883270 1215935 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 22:55:57.883284 1215935 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 22:55:57.883293 1215935 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 22:55:57.883304 1215935 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 22:55:57.883313 1215935 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 22:55:57.883324 1215935 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 22:55:57.883330 1215935 command_runner.go:130] > #   The currently recognized values are:
	I0108 22:55:57.883342 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 22:55:57.883351 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 22:55:57.883358 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 22:55:57.883366 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 22:55:57.883379 1215935 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 22:55:57.883389 1215935 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 22:55:57.883401 1215935 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 22:55:57.883410 1215935 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 22:55:57.883419 1215935 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 22:55:57.883425 1215935 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 22:55:57.883431 1215935 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0108 22:55:57.883436 1215935 command_runner.go:130] > runtime_type = "oci"
	I0108 22:55:57.883441 1215935 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 22:55:57.883451 1215935 command_runner.go:130] > runtime_config_path = ""
	I0108 22:55:57.883457 1215935 command_runner.go:130] > monitor_path = ""
	I0108 22:55:57.883462 1215935 command_runner.go:130] > monitor_cgroup = ""
	I0108 22:55:57.883471 1215935 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 22:55:57.883486 1215935 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 22:55:57.883495 1215935 command_runner.go:130] > # running containers
	I0108 22:55:57.883500 1215935 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 22:55:57.883508 1215935 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 22:55:57.883516 1215935 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 22:55:57.883529 1215935 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 22:55:57.883536 1215935 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 22:55:57.883546 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 22:55:57.883552 1215935 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 22:55:57.883558 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 22:55:57.883568 1215935 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 22:55:57.883575 1215935 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 22:55:57.883587 1215935 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 22:55:57.883594 1215935 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 22:55:57.883602 1215935 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 22:55:57.883611 1215935 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 22:55:57.883626 1215935 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 22:55:57.883634 1215935 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 22:55:57.883648 1215935 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 22:55:57.883663 1215935 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 22:55:57.883670 1215935 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 22:55:57.883679 1215935 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 22:55:57.883688 1215935 command_runner.go:130] > # Example:
	I0108 22:55:57.883695 1215935 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 22:55:57.883706 1215935 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 22:55:57.883713 1215935 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 22:55:57.883723 1215935 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 22:55:57.883728 1215935 command_runner.go:130] > # cpuset = 0
	I0108 22:55:57.883738 1215935 command_runner.go:130] > # cpushares = "0-1"
	I0108 22:55:57.883743 1215935 command_runner.go:130] > # Where:
	I0108 22:55:57.883749 1215935 command_runner.go:130] > # The workload name is workload-type.
	I0108 22:55:57.883758 1215935 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 22:55:57.883765 1215935 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 22:55:57.883773 1215935 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 22:55:57.883786 1215935 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 22:55:57.883794 1215935 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 22:55:57.884020 1215935 command_runner.go:130] > # 
	I0108 22:55:57.884045 1215935 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 22:55:57.884050 1215935 command_runner.go:130] > #
	I0108 22:55:57.884058 1215935 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 22:55:57.884066 1215935 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 22:55:57.884074 1215935 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 22:55:57.884085 1215935 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 22:55:57.884092 1215935 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 22:55:57.884101 1215935 command_runner.go:130] > [crio.image]
	I0108 22:55:57.884109 1215935 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 22:55:57.884114 1215935 command_runner.go:130] > # default_transport = "docker://"
	I0108 22:55:57.884126 1215935 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 22:55:57.884134 1215935 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 22:55:57.884139 1215935 command_runner.go:130] > # global_auth_file = ""
	I0108 22:55:57.884146 1215935 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 22:55:57.884158 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:55:57.884164 1215935 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 22:55:57.884176 1215935 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 22:55:57.884184 1215935 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 22:55:57.884194 1215935 command_runner.go:130] > # This option supports live configuration reload.
	I0108 22:55:57.884200 1215935 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 22:55:57.884211 1215935 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 22:55:57.884218 1215935 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 22:55:57.884226 1215935 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 22:55:57.884233 1215935 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 22:55:57.884239 1215935 command_runner.go:130] > # pause_command = "/pause"
	I0108 22:55:57.884250 1215935 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 22:55:57.884257 1215935 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 22:55:57.884268 1215935 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 22:55:57.884276 1215935 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 22:55:57.884286 1215935 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 22:55:57.884291 1215935 command_runner.go:130] > # signature_policy = ""
	I0108 22:55:57.884299 1215935 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 22:55:57.884307 1215935 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 22:55:57.884312 1215935 command_runner.go:130] > # changing them here.
	I0108 22:55:57.884321 1215935 command_runner.go:130] > # insecure_registries = [
	I0108 22:55:57.884449 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.884478 1215935 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 22:55:57.884487 1215935 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 22:55:57.884497 1215935 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 22:55:57.884504 1215935 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 22:55:57.884509 1215935 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 22:55:57.884518 1215935 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 22:55:57.884528 1215935 command_runner.go:130] > # CNI plugins.
	I0108 22:55:57.884534 1215935 command_runner.go:130] > [crio.network]
	I0108 22:55:57.884544 1215935 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 22:55:57.884565 1215935 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 22:55:57.884571 1215935 command_runner.go:130] > # cni_default_network = ""
	I0108 22:55:57.884578 1215935 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 22:55:57.884584 1215935 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 22:55:57.884591 1215935 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 22:55:57.884602 1215935 command_runner.go:130] > # plugin_dirs = [
	I0108 22:55:57.884610 1215935 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 22:55:57.884614 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.884624 1215935 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 22:55:57.884632 1215935 command_runner.go:130] > [crio.metrics]
	I0108 22:55:57.884638 1215935 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 22:55:57.884646 1215935 command_runner.go:130] > # enable_metrics = false
	I0108 22:55:57.884652 1215935 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 22:55:57.884659 1215935 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 22:55:57.884666 1215935 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 22:55:57.884675 1215935 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 22:55:57.884684 1215935 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 22:55:57.884691 1215935 command_runner.go:130] > # metrics_collectors = [
	I0108 22:55:57.884696 1215935 command_runner.go:130] > # 	"operations",
	I0108 22:55:57.884705 1215935 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 22:55:57.884711 1215935 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 22:55:57.884719 1215935 command_runner.go:130] > # 	"operations_errors",
	I0108 22:55:57.884729 1215935 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 22:55:57.884735 1215935 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 22:55:57.884740 1215935 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 22:55:57.884746 1215935 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 22:55:57.884753 1215935 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 22:55:57.884759 1215935 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 22:55:57.884766 1215935 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 22:55:57.884771 1215935 command_runner.go:130] > # 	"containers_oom_total",
	I0108 22:55:57.884776 1215935 command_runner.go:130] > # 	"containers_oom",
	I0108 22:55:57.884782 1215935 command_runner.go:130] > # 	"processes_defunct",
	I0108 22:55:57.884789 1215935 command_runner.go:130] > # 	"operations_total",
	I0108 22:55:57.884794 1215935 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 22:55:57.885052 1215935 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 22:55:57.885068 1215935 command_runner.go:130] > # 	"operations_errors_total",
	I0108 22:55:57.885074 1215935 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 22:55:57.885081 1215935 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 22:55:57.885087 1215935 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 22:55:57.885095 1215935 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 22:55:57.885101 1215935 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 22:55:57.885106 1215935 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 22:55:57.885114 1215935 command_runner.go:130] > # ]
	I0108 22:55:57.885120 1215935 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 22:55:57.885126 1215935 command_runner.go:130] > # metrics_port = 9090
	I0108 22:55:57.885133 1215935 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 22:55:57.885138 1215935 command_runner.go:130] > # metrics_socket = ""
	I0108 22:55:57.885146 1215935 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 22:55:57.885156 1215935 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 22:55:57.885164 1215935 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 22:55:57.885173 1215935 command_runner.go:130] > # certificate on any modification event.
	I0108 22:55:57.885178 1215935 command_runner.go:130] > # metrics_cert = ""
	I0108 22:55:57.885186 1215935 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 22:55:57.885195 1215935 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 22:55:57.885201 1215935 command_runner.go:130] > # metrics_key = ""
	I0108 22:55:57.885208 1215935 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 22:55:57.885213 1215935 command_runner.go:130] > [crio.tracing]
	I0108 22:55:57.885220 1215935 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 22:55:57.885227 1215935 command_runner.go:130] > # enable_tracing = false
	I0108 22:55:57.885237 1215935 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 22:55:57.885243 1215935 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 22:55:57.885251 1215935 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 22:55:57.885261 1215935 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 22:55:57.885268 1215935 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 22:55:57.885276 1215935 command_runner.go:130] > [crio.stats]
	I0108 22:55:57.885283 1215935 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 22:55:57.885290 1215935 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 22:55:57.885428 1215935 command_runner.go:130] > # stats_collection_period = 0
	I0108 22:55:57.887431 1215935 command_runner.go:130] ! time="2024-01-08 22:55:57.876936260Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0108 22:55:57.887454 1215935 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 22:55:57.887901 1215935 cni.go:84] Creating CNI manager for ""
	I0108 22:55:57.887916 1215935 cni.go:136] 2 nodes found, recommending kindnet
	I0108 22:55:57.887925 1215935 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:55:57.887945 1215935 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-265402 NodeName:multinode-265402-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:55:57.888075 1215935 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-265402-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:55:57.888136 1215935 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-265402-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:55:57.888205 1215935 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:55:57.898085 1215935 command_runner.go:130] > kubeadm
	I0108 22:55:57.898120 1215935 command_runner.go:130] > kubectl
	I0108 22:55:57.898126 1215935 command_runner.go:130] > kubelet
	I0108 22:55:57.899311 1215935 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:55:57.899386 1215935 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 22:55:57.910289 1215935 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0108 22:55:57.932791 1215935 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
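	The 430-byte file copied above is the systemd drop-in (10-kubeadm.conf) carrying the kubelet ExecStart override shown just above; the 352-byte file is the base kubelet.service unit. After the daemon-reload/enable/start later in this log, the merged unit can be verified from the node, for example (sketch, same profile/node assumptions as above):

	minikube -p multinode-265402 ssh -n m02 -- sudo systemctl cat kubelet
	minikube -p multinode-265402 ssh -n m02 -- sudo systemctl show kubelet -p ExecStart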
	I0108 22:55:57.955139 1215935 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0108 22:55:57.960152 1215935 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
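	The bash one-liner above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is filtered out, the current mapping is appended, and the result is copied back over /etc/hosts. The same command, unrolled for readability (values exactly as in this run):

	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.58.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts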
	I0108 22:55:57.974088 1215935 host.go:66] Checking if "multinode-265402" exists ...
	I0108 22:55:57.974361 1215935 start.go:304] JoinCluster: &{Name:multinode-265402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-265402 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:55:57.974452 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 22:55:57.974520 1215935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:55:57.974899 1215935 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:55:57.992971 1215935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:55:58.167765 1215935 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token f920xu.8liojajdr3bx6kpe --discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 
	I0108 22:55:58.167848 1215935 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 22:55:58.167878 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f920xu.8liojajdr3bx6kpe --discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-265402-m02"
	I0108 22:55:58.212828 1215935 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 22:55:58.253139 1215935 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0108 22:55:58.253164 1215935 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0108 22:55:58.253172 1215935 command_runner.go:130] > OS: Linux
	I0108 22:55:58.253179 1215935 command_runner.go:130] > CGROUPS_CPU: enabled
	I0108 22:55:58.253190 1215935 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0108 22:55:58.253196 1215935 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0108 22:55:58.253206 1215935 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0108 22:55:58.253213 1215935 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0108 22:55:58.253222 1215935 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0108 22:55:58.253241 1215935 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0108 22:55:58.253252 1215935 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0108 22:55:58.253259 1215935 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0108 22:55:58.386722 1215935 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 22:55:58.386747 1215935 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 22:55:58.430486 1215935 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:55:58.430512 1215935 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:55:58.430519 1215935 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 22:55:58.543691 1215935 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 22:56:01.558469 1215935 command_runner.go:130] > This node has joined the cluster:
	I0108 22:56:01.558496 1215935 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 22:56:01.558505 1215935 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 22:56:01.558513 1215935 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 22:56:01.561892 1215935 command_runner.go:130] ! W0108 22:55:58.212295    1030 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 22:56:01.561965 1215935 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0108 22:56:01.561985 1215935 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:56:01.562011 1215935 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f920xu.8liojajdr3bx6kpe --discovery-token-ca-cert-hash sha256:43482142ca7c9b39b0f61eec38302a813b49d9c24791d86fa0d2c75ce3e42066 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-265402-m02": (3.394121186s)
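	The join itself is two steps driven from the control plane: `kubeadm token create --print-join-command --ttl=0` emits a ready-made join command (token plus discovery CA-cert hash), and the worker runs it with the extra flags minikube appends (--ignore-preflight-errors=all, --cri-socket, --node-name). A manual equivalent, sketched with this run's names (using the unix:// scheme avoids the CRI-socket deprecation warning logged above):

	# On the control-plane node
	JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
	# On the worker to be added (the token and hash come from $JOIN_CMD)
	sudo $JOIN_CMD --ignore-preflight-errors=all \
	  --cri-socket unix:///var/run/crio/crio.sock \
	  --node-name=multinode-265402-m02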
	I0108 22:56:01.562046 1215935 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 22:56:01.790485 1215935 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0108 22:56:01.790607 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-265402 minikube.k8s.io/updated_at=2024_01_08T22_56_01_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:56:01.893658 1215935 command_runner.go:130] > node/multinode-265402-m02 labeled
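	With the node joined, minikube labels it through the on-node kubectl and the in-cluster kubeconfig (minikube.k8s.io/primary=false plus version/commit/updated_at metadata, applied only to nodes not labeled primary). From the host, the result can be checked with, for example (same context assumption as above):

	kubectl --context multinode-265402 get nodes -o wide --show-labels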
	I0108 22:56:01.897295 1215935 start.go:306] JoinCluster complete in 3.922928804s
	I0108 22:56:01.897323 1215935 cni.go:84] Creating CNI manager for ""
	I0108 22:56:01.897329 1215935 cni.go:136] 2 nodes found, recommending kindnet
	I0108 22:56:01.897387 1215935 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 22:56:01.902386 1215935 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 22:56:01.902410 1215935 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0108 22:56:01.902427 1215935 command_runner.go:130] > Device: 3ah/58d	Inode: 1051177     Links: 1
	I0108 22:56:01.902435 1215935 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 22:56:01.902442 1215935 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0108 22:56:01.902448 1215935 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0108 22:56:01.902455 1215935 command_runner.go:130] > Change: 2024-01-08 22:30:54.582539572 +0000
	I0108 22:56:01.902461 1215935 command_runner.go:130] >  Birth: 2024-01-08 22:30:54.538539869 +0000
	I0108 22:56:01.902520 1215935 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 22:56:01.902528 1215935 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 22:56:01.924607 1215935 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 22:56:02.205054 1215935 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 22:56:02.209904 1215935 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 22:56:02.212859 1215935 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 22:56:02.226310 1215935 command_runner.go:130] > daemonset.apps/kindnet configured
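	With two nodes found, the CNI manager selects kindnet and re-applies its manifest; the apply is idempotent (ClusterRole, ClusterRoleBinding and ServiceAccount unchanged, DaemonSet reconfigured so it schedules onto the new node). A quick check that a kindnet pod actually lands on m02 (sketch):

	kubectl --context multinode-265402 -n kube-system rollout status ds/kindnet --timeout=2m
	kubectl --context multinode-265402 -n kube-system get pods -o wide | grep kindnet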
	I0108 22:56:02.232232 1215935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:56:02.232514 1215935 kapi.go:59] client config for multinode-265402: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:56:02.232881 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 22:56:02.232897 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:02.232907 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:02.232914 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:02.235446 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:02.235466 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:02.235475 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:02.235481 1215935 round_trippers.go:580]     Content-Length: 291
	I0108 22:56:02.235487 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:02 GMT
	I0108 22:56:02.235494 1215935 round_trippers.go:580]     Audit-Id: 108a17be-0fef-44ce-a296-38f0066eda25
	I0108 22:56:02.235500 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:02.235506 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:02.235512 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:02.235534 1215935 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"408","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 22:56:02.235655 1215935 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"408","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 22:56:02.235698 1215935 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 22:56:02.235703 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:02.235710 1215935 round_trippers.go:473]     Content-Type: application/json
	I0108 22:56:02.235717 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:02.235724 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:02.242593 1215935 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 22:56:02.242612 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:02.242621 1215935 round_trippers.go:580]     Audit-Id: c731d64b-09c6-4fd7-bedc-cb186a88e591
	I0108 22:56:02.242627 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:02.242633 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:02.242639 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:02.242645 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:02.242665 1215935 round_trippers.go:580]     Content-Length: 291
	I0108 22:56:02.242672 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:02 GMT
	I0108 22:56:02.242698 1215935 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"448","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 22:56:02.733283 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 22:56:02.733309 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:02.733318 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:02.733326 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:02.735780 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:02.735801 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:02.735810 1215935 round_trippers.go:580]     Audit-Id: ff483e59-fe4a-463c-9bac-b72e5f058bff
	I0108 22:56:02.735817 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:02.735828 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:02.735834 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:02.735840 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:02.735846 1215935 round_trippers.go:580]     Content-Length: 291
	I0108 22:56:02.735853 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:02 GMT
	I0108 22:56:02.736061 1215935 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2016178b-5f38-485d-8a66-5b0370c26d64","resourceVersion":"459","creationTimestamp":"2024-01-08T22:54:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 22:56:02.736156 1215935 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-265402" context rescaled to 1 replicas
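	The two API round trips above rescale the coredns Deployment from 2 replicas to 1 through the autoscaling/v1 scale subresource: GET the Scale object, rewrite spec.replicas, PUT it back, then re-read until status.replicas matches. The CLI equivalent of the same call (context name assumed from the profile):

	kubectl --context multinode-265402 -n kube-system scale deployment coredns --replicas=1
	kubectl --context multinode-265402 -n kube-system get deployment coredns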
	I0108 22:56:02.736181 1215935 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 22:56:02.738392 1215935 out.go:177] * Verifying Kubernetes components...
	I0108 22:56:02.740564 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:56:02.754512 1215935 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:56:02.754784 1215935 kapi.go:59] client config for multinode-265402: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/multinode-265402/client.key", CAFile:"/home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9a60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 22:56:02.755069 1215935 node_ready.go:35] waiting up to 6m0s for node "multinode-265402-m02" to be "Ready" ...
	I0108 22:56:02.755153 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:02.755165 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:02.755174 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:02.755181 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:02.757953 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:02.757974 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:02.757982 1215935 round_trippers.go:580]     Audit-Id: ff737c24-11f9-442b-9b91-4cb2d370119d
	I0108 22:56:02.757991 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:02.757997 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:02.758004 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:02.758010 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:02.758017 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:02 GMT
	I0108 22:56:02.758179 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"447","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 22:56:03.255840 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:03.255864 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:03.255874 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:03.255882 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:03.258434 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:03.258466 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:03.258475 1215935 round_trippers.go:580]     Audit-Id: 74a11c79-90ed-4754-b8ce-f9a8f440e340
	I0108 22:56:03.258482 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:03.258489 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:03.258495 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:03.258502 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:03.258510 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:03 GMT
	I0108 22:56:03.258816 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"447","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 22:56:03.755419 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:03.755448 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:03.755458 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:03.755466 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:03.757949 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:03.757972 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:03.757980 1215935 round_trippers.go:580]     Audit-Id: 403f5c25-837f-4215-9126-86d9274a158c
	I0108 22:56:03.757986 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:03.757993 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:03.757998 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:03.758005 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:03.758010 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:03 GMT
	I0108 22:56:03.758264 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"447","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 22:56:04.255918 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:04.255944 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:04.255954 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:04.255961 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:04.258481 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:04.258507 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:04.258516 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:04 GMT
	I0108 22:56:04.258523 1215935 round_trippers.go:580]     Audit-Id: d7ccc5be-9111-4a97-bc72-ff114d008e21
	I0108 22:56:04.258529 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:04.258540 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:04.258547 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:04.258560 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:04.258778 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"447","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0108 22:56:04.756130 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:04.756155 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:04.756165 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:04.756173 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:04.759672 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:56:04.759698 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:04.759707 1215935 round_trippers.go:580]     Audit-Id: 417b7611-55c2-46f0-9c59-29fb2ddf429f
	I0108 22:56:04.759714 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:04.759720 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:04.759729 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:04.759736 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:04.759742 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:04 GMT
	I0108 22:56:04.759882 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:04.760302 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
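	Everything from here to the end of the step is a readiness poll: roughly every 500 ms the Node object is fetched and its Ready condition inspected, up to the 6m0s budget noted above. The same wait expressed as a one-liner (sketch, names from this run):

	kubectl --context multinode-265402 wait --for=condition=Ready \
	  node/multinode-265402-m02 --timeout=6m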
	I0108 22:56:05.255747 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:05.255771 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:05.255782 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:05.255791 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:05.258365 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:05.258387 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:05.258395 1215935 round_trippers.go:580]     Audit-Id: 9d7f2dc6-c002-4dfd-941d-4c755f04676a
	I0108 22:56:05.258402 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:05.258408 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:05.258415 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:05.258422 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:05.258428 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:05 GMT
	I0108 22:56:05.258540 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:05.755694 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:05.755725 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:05.755743 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:05.755750 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:05.758124 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:05.758146 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:05.758154 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:05.758160 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:05.758166 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:05.758173 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:05 GMT
	I0108 22:56:05.758179 1215935 round_trippers.go:580]     Audit-Id: c2f097c3-e45d-4396-a059-7cd952d69cd8
	I0108 22:56:05.758185 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:05.758447 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:06.255708 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:06.255734 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:06.255743 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:06.255759 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:06.258050 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:06.258070 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:06.258078 1215935 round_trippers.go:580]     Audit-Id: 40a92b9d-826f-4fb2-a745-084677b8bc60
	I0108 22:56:06.258085 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:06.258091 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:06.258097 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:06.258103 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:06.258110 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:06 GMT
	I0108 22:56:06.258302 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:06.755551 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:06.755580 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:06.755590 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:06.755598 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:06.758058 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:06.758081 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:06.758089 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:06.758096 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:06.758102 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:06 GMT
	I0108 22:56:06.758108 1215935 round_trippers.go:580]     Audit-Id: 2dcc0a4b-45a6-4f25-b01c-72db00ca7488
	I0108 22:56:06.758114 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:06.758120 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:06.758259 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:07.255643 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:07.255668 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:07.255679 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:07.255686 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:07.258092 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:07.258118 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:07.258127 1215935 round_trippers.go:580]     Audit-Id: 9c0eb7eb-7c55-4fff-9816-8c653b6b50f5
	I0108 22:56:07.258134 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:07.258142 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:07.258149 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:07.258155 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:07.258170 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:07 GMT
	I0108 22:56:07.258348 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:07.258761 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:07.755396 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:07.755421 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:07.755431 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:07.755439 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:07.757848 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:07.757872 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:07.757880 1215935 round_trippers.go:580]     Audit-Id: 492692c1-d729-4ae1-ba01-d672ffc681f3
	I0108 22:56:07.757886 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:07.757893 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:07.757899 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:07.757909 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:07.757916 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:07 GMT
	I0108 22:56:07.758162 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:08.255827 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:08.255853 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:08.255863 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:08.255870 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:08.258385 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:08.258407 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:08.258416 1215935 round_trippers.go:580]     Audit-Id: 2301c3b6-1df1-4779-8556-ff8812aca1a3
	I0108 22:56:08.258422 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:08.258429 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:08.258435 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:08.258445 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:08.258452 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:08 GMT
	I0108 22:56:08.258693 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:08.755745 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:08.755770 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:08.755781 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:08.755788 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:08.758273 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:08.758296 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:08.758304 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:08.758311 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:08.758317 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:08.758327 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:08.758334 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:08 GMT
	I0108 22:56:08.758341 1215935 round_trippers.go:580]     Audit-Id: e5ff9c83-274e-480c-b695-508234a0f8d0
	I0108 22:56:08.761400 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:09.255307 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:09.255333 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:09.255344 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:09.255352 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:09.257867 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:09.257887 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:09.257895 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:09.257902 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:09.257908 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:09 GMT
	I0108 22:56:09.257915 1215935 round_trippers.go:580]     Audit-Id: 18b3dee4-3e10-4141-b0bc-e5164cbb0c98
	I0108 22:56:09.257921 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:09.257927 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:09.258099 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:09.755305 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:09.755329 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:09.755339 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:09.755346 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:09.757886 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:09.757914 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:09.757923 1215935 round_trippers.go:580]     Audit-Id: 88630b09-739e-4fb5-a1cd-fa4065b9b3fd
	I0108 22:56:09.757930 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:09.757936 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:09.757942 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:09.757949 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:09.757963 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:09 GMT
	I0108 22:56:09.758107 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:09.758492 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:10.256302 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:10.256324 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:10.256335 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:10.256342 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:10.258942 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:10.258968 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:10.258977 1215935 round_trippers.go:580]     Audit-Id: c88864d6-5b6f-4b46-9922-f5b6c57bf511
	I0108 22:56:10.258983 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:10.258989 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:10.258995 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:10.259002 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:10.259009 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:10 GMT
	I0108 22:56:10.259251 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:10.756171 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:10.756198 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:10.756210 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:10.756217 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:10.758680 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:10.758708 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:10.758718 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:10.758729 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:10.758737 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:10.758745 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:10 GMT
	I0108 22:56:10.758755 1215935 round_trippers.go:580]     Audit-Id: 8cb8c8a5-f1ca-4f68-8e95-b8e0b4dcc761
	I0108 22:56:10.758761 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:10.758880 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"476","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0108 22:56:11.255953 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:11.255979 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:11.255989 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:11.255997 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:11.258498 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:11.258519 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:11.258528 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:11.258535 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:11.258541 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:11.258548 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:11.258555 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:11 GMT
	I0108 22:56:11.258561 1215935 round_trippers.go:580]     Audit-Id: 20ceb9db-d1c1-4b3f-b3b0-d46a62675a67
	I0108 22:56:11.258691 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:11.755284 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:11.755311 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:11.755321 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:11.755328 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:11.757757 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:11.757776 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:11.757784 1215935 round_trippers.go:580]     Audit-Id: 92f5a65d-cb34-461b-93c5-320e254e7ff9
	I0108 22:56:11.757790 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:11.757796 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:11.757802 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:11.757808 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:11.757815 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:11 GMT
	I0108 22:56:11.757971 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:12.256168 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:12.256196 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:12.256206 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:12.256213 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:12.258677 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:12.258703 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:12.258711 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:12.258718 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:12.258724 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:12.258731 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:12 GMT
	I0108 22:56:12.258738 1215935 round_trippers.go:580]     Audit-Id: f1ed737b-0e46-4d2a-a237-58932c775909
	I0108 22:56:12.258746 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:12.258867 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:12.259303 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:12.756062 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:12.756085 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:12.756095 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:12.756103 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:12.758463 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:12.758484 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:12.758492 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:12 GMT
	I0108 22:56:12.758499 1215935 round_trippers.go:580]     Audit-Id: d38a628b-0a6e-481d-b946-049bef0273c6
	I0108 22:56:12.758505 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:12.758511 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:12.758517 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:12.758523 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:12.758668 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:13.255487 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:13.255512 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:13.255521 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:13.255528 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:13.257994 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:13.258019 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:13.258028 1215935 round_trippers.go:580]     Audit-Id: 68692491-6103-4a5e-9df6-3983dbb3eb21
	I0108 22:56:13.258034 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:13.258041 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:13.258047 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:13.258053 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:13.258059 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:13 GMT
	I0108 22:56:13.258216 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:13.755900 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:13.755926 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:13.755936 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:13.755943 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:13.758817 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:13.758838 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:13.758847 1215935 round_trippers.go:580]     Audit-Id: f4a273da-0f99-4faf-9297-5905a3aa3f58
	I0108 22:56:13.758853 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:13.758860 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:13.758866 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:13.758872 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:13.758879 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:13 GMT
	I0108 22:56:13.759005 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:14.256119 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:14.256142 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:14.256152 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:14.256160 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:14.258563 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:14.258585 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:14.258593 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:14 GMT
	I0108 22:56:14.258600 1215935 round_trippers.go:580]     Audit-Id: 713e3edd-767a-4175-baf0-3b8375a50d84
	I0108 22:56:14.258606 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:14.258612 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:14.258618 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:14.258624 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:14.258763 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:14.755815 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:14.755838 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:14.755848 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:14.755855 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:14.758380 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:14.758405 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:14.758414 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:14.758420 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:14.758426 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:14.758433 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:14 GMT
	I0108 22:56:14.758443 1215935 round_trippers.go:580]     Audit-Id: 31451a9b-aeed-43f6-b832-ea5808c3c6e4
	I0108 22:56:14.758449 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:14.758755 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:14.759159 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:15.256165 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:15.256202 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:15.256212 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:15.256219 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:15.258766 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:15.258788 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:15.258796 1215935 round_trippers.go:580]     Audit-Id: 009644c2-1420-4504-8aee-f086d31e6b0f
	I0108 22:56:15.258802 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:15.258808 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:15.258814 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:15.258820 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:15.258827 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:15 GMT
	I0108 22:56:15.259135 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:15.755259 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:15.755309 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:15.755320 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:15.755327 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:15.758059 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:15.758088 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:15.758102 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:15.758109 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:15.758116 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:15 GMT
	I0108 22:56:15.758122 1215935 round_trippers.go:580]     Audit-Id: 89b585b5-2bf3-4ca1-b11f-df43a9e4552d
	I0108 22:56:15.758129 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:15.758135 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:15.758274 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:16.255310 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:16.255334 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:16.255344 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:16.255367 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:16.257647 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:16.257668 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:16.257676 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:16 GMT
	I0108 22:56:16.257683 1215935 round_trippers.go:580]     Audit-Id: d8a21030-1e98-4694-ac15-d4a01cccaef1
	I0108 22:56:16.257689 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:16.257695 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:16.257701 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:16.257708 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:16.257837 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:16.755877 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:16.755901 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:16.755911 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:16.755918 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:16.758371 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:16.758394 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:16.758402 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:16.758409 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:16.758415 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:16.758423 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:16.758430 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:16 GMT
	I0108 22:56:16.758436 1215935 round_trippers.go:580]     Audit-Id: c5435b32-079d-4665-9b42-a9d2730c6d9f
	I0108 22:56:16.758563 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:17.255350 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:17.255377 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:17.255387 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:17.255394 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:17.257776 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:17.257799 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:17.257807 1215935 round_trippers.go:580]     Audit-Id: a7e201ef-1871-4896-adc5-390ec91574cc
	I0108 22:56:17.257813 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:17.257820 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:17.257828 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:17.257834 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:17.257841 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:17 GMT
	I0108 22:56:17.257972 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:17.258353 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:17.756126 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:17.756150 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:17.756161 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:17.756168 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:17.758747 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:17.758770 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:17.758778 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:17.758785 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:17 GMT
	I0108 22:56:17.758794 1215935 round_trippers.go:580]     Audit-Id: 2531597c-e2e5-4baf-a9ab-54da6e5f0327
	I0108 22:56:17.758801 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:17.758807 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:17.758816 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:17.758943 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:18.255636 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:18.255660 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:18.255670 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:18.255678 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:18.258169 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:18.258197 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:18.258206 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:18.258213 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:18 GMT
	I0108 22:56:18.258220 1215935 round_trippers.go:580]     Audit-Id: bc57a467-807b-48c1-9dbc-7fb300eccf92
	I0108 22:56:18.258227 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:18.258234 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:18.258240 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:18.258399 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:18.755310 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:18.755334 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:18.755344 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:18.755352 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:18.757896 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:18.757917 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:18.757925 1215935 round_trippers.go:580]     Audit-Id: ce9588eb-566d-4d04-91b8-d94bb10cf771
	I0108 22:56:18.757933 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:18.757940 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:18.757947 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:18.757955 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:18.757968 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:18 GMT
	I0108 22:56:18.758152 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:19.256332 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:19.256357 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:19.256366 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:19.256374 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:19.258901 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:19.258929 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:19.258938 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:19.258944 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:19.258951 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:19.258958 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:19 GMT
	I0108 22:56:19.258965 1215935 round_trippers.go:580]     Audit-Id: a269db1a-e0ac-4c7f-9dff-b5f6866da447
	I0108 22:56:19.258975 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:19.259317 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:19.259716 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:19.755958 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:19.755988 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:19.755998 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:19.756005 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:19.758527 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:19.758550 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:19.758559 1215935 round_trippers.go:580]     Audit-Id: 6b26323e-aed3-44c7-9395-7b51ed0e4a85
	I0108 22:56:19.758566 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:19.758576 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:19.758583 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:19.758592 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:19.758599 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:19 GMT
	I0108 22:56:19.758765 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:20.255656 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:20.255682 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:20.255692 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:20.255700 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:20.258298 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:20.258328 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:20.258337 1215935 round_trippers.go:580]     Audit-Id: 54bca550-66ed-402e-bca8-1e1c9cee0c01
	I0108 22:56:20.258344 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:20.258361 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:20.258367 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:20.258374 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:20.258389 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:20 GMT
	I0108 22:56:20.258524 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:20.755859 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:20.755894 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:20.755905 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:20.755913 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:20.758529 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:20.758550 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:20.758559 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:20.758565 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:20 GMT
	I0108 22:56:20.758572 1215935 round_trippers.go:580]     Audit-Id: 81c1aacd-aa5b-4309-bbbd-4b6e849de6a3
	I0108 22:56:20.758578 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:20.758585 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:20.758591 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:20.758694 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:21.255288 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:21.255326 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:21.255336 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:21.255351 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:21.257828 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:21.257851 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:21.257859 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:21.257866 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:21 GMT
	I0108 22:56:21.257872 1215935 round_trippers.go:580]     Audit-Id: 288fe83d-18f0-4ff2-a758-c0cd48dd2eda
	I0108 22:56:21.257878 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:21.257885 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:21.257903 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:21.258238 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:21.755986 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:21.756016 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:21.756032 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:21.756040 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:21.759009 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:21.759033 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:21.759042 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:21.759049 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:21.759055 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:21.759062 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:21 GMT
	I0108 22:56:21.759068 1215935 round_trippers.go:580]     Audit-Id: cd80c9cf-114b-48ee-a150-280a7b3e199b
	I0108 22:56:21.759074 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:21.759215 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:21.759651 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:22.255717 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:22.255741 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:22.255752 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:22.255759 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:22.258330 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:22.258355 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:22.258364 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:22.258371 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:22 GMT
	I0108 22:56:22.258378 1215935 round_trippers.go:580]     Audit-Id: 1b34ebfe-16b2-459f-935c-ab524802fc53
	I0108 22:56:22.258384 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:22.258391 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:22.258406 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:22.258534 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:22.755347 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:22.755371 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:22.755382 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:22.755390 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:22.758073 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:22.758101 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:22.758110 1215935 round_trippers.go:580]     Audit-Id: e3721386-6171-4faa-b5f0-7a52a06087a2
	I0108 22:56:22.758116 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:22.758122 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:22.758128 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:22.758135 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:22.758142 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:22 GMT
	I0108 22:56:22.758423 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:23.256129 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:23.256156 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:23.256166 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:23.256174 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:23.258752 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:23.258776 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:23.258784 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:23.258791 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:23.258797 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:23 GMT
	I0108 22:56:23.258804 1215935 round_trippers.go:580]     Audit-Id: c2823710-59db-401b-9a1a-0795c0db2d37
	I0108 22:56:23.258814 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:23.258828 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:23.259121 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:23.756082 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:23.756129 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:23.756140 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:23.756147 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:23.758628 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:23.758649 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:23.758657 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:23.758664 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:23 GMT
	I0108 22:56:23.758670 1215935 round_trippers.go:580]     Audit-Id: 584746e6-33b0-415b-8ba1-e5f5fa8db837
	I0108 22:56:23.758677 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:23.758683 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:23.758692 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:23.759015 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:24.255656 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:24.255690 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:24.255701 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:24.255708 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:24.258273 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:24.258304 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:24.258314 1215935 round_trippers.go:580]     Audit-Id: 589a1733-b398-4f83-b5cd-0ee54fd79538
	I0108 22:56:24.258320 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:24.258327 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:24.258333 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:24.258343 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:24.258354 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:24 GMT
	I0108 22:56:24.258481 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:24.258884 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:24.755341 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:24.755364 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:24.755374 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:24.755388 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:24.757818 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:24.757842 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:24.757851 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:24.757857 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:24.757864 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:24.757870 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:24.757877 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:24 GMT
	I0108 22:56:24.757883 1215935 round_trippers.go:580]     Audit-Id: 2d69130c-b143-4a73-a219-7e4daae404af
	I0108 22:56:24.758009 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:25.256097 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:25.256122 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:25.256137 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:25.256145 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:25.258626 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:25.258648 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:25.258657 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:25.258664 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:25.258670 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:25.258676 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:25.258683 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:25 GMT
	I0108 22:56:25.258689 1215935 round_trippers.go:580]     Audit-Id: ec1aa892-ed6a-425c-a05c-e923d1344602
	I0108 22:56:25.258811 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:25.755924 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:25.755952 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:25.755962 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:25.755970 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:25.758442 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:25.758468 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:25.758476 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:25 GMT
	I0108 22:56:25.758483 1215935 round_trippers.go:580]     Audit-Id: b35b4a65-9144-44bb-bdca-8b928b2fef6d
	I0108 22:56:25.758490 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:25.758544 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:25.758557 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:25.758563 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:25.758704 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:26.255801 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:26.255822 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:26.255832 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:26.255839 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:26.258218 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:26.258243 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:26.258251 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:26.258258 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:26.258264 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:26.258270 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:26.258277 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:26 GMT
	I0108 22:56:26.258283 1215935 round_trippers.go:580]     Audit-Id: f30d357d-7479-4911-bafa-f1259e16ca75
	I0108 22:56:26.258453 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:26.755319 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:26.755343 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:26.755352 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:26.755359 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:26.757805 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:26.757828 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:26.757836 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:26.757843 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:26.757849 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:26 GMT
	I0108 22:56:26.757855 1215935 round_trippers.go:580]     Audit-Id: f1b102b1-dd76-46b8-9aa9-3dd383369d47
	I0108 22:56:26.757861 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:26.757868 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:26.758639 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:26.759033 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:27.255669 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:27.255693 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:27.255705 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:27.255712 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:27.258438 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:27.258468 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:27.258478 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:27.258484 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:27.258491 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:27.258498 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:27 GMT
	I0108 22:56:27.258505 1215935 round_trippers.go:580]     Audit-Id: abe6ca57-1b55-45f1-9c5e-f0e623174135
	I0108 22:56:27.258512 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:27.258868 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:27.755750 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:27.755774 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:27.755784 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:27.755791 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:27.758215 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:27.758239 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:27.758247 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:27.758254 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:27.758261 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:27 GMT
	I0108 22:56:27.758268 1215935 round_trippers.go:580]     Audit-Id: 165a33a3-ae10-4d44-a3a3-79338d3a3b9a
	I0108 22:56:27.758274 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:27.758281 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:27.758563 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:28.255894 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:28.255922 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:28.255932 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:28.255940 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:28.258723 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:28.258746 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:28.258754 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:28.258761 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:28 GMT
	I0108 22:56:28.258767 1215935 round_trippers.go:580]     Audit-Id: 1d7f625a-e45e-4ee1-b073-ff784ecae3fd
	I0108 22:56:28.258774 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:28.258780 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:28.258786 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:28.258921 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:28.755306 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:28.755333 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:28.755344 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:28.755351 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:28.758037 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:28.758057 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:28.758066 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:28.758072 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:28.758078 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:28.758085 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:28 GMT
	I0108 22:56:28.758091 1215935 round_trippers.go:580]     Audit-Id: f863d2b8-b28b-45f1-ae83-0b987f6baf26
	I0108 22:56:28.758098 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:28.758220 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:29.255882 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:29.255912 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:29.255922 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:29.255929 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:29.258398 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:29.258424 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:29.258433 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:29.258440 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:29.258446 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:29.258455 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:29 GMT
	I0108 22:56:29.258465 1215935 round_trippers.go:580]     Audit-Id: 5613dfe7-7f94-41f3-9a66-d5e29c7f3e82
	I0108 22:56:29.258471 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:29.258569 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:29.258971 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:29.755325 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:29.755351 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:29.755361 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:29.755368 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:29.757874 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:29.757899 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:29.757908 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:29.757915 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:29.757921 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:29.757928 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:29 GMT
	I0108 22:56:29.757951 1215935 round_trippers.go:580]     Audit-Id: 14de2b6e-36d0-4f8d-80a8-dcc5a20abb9d
	I0108 22:56:29.757963 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:29.758098 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:30.255843 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:30.255870 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:30.255886 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:30.255894 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:30.258689 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:30.258713 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:30.258722 1215935 round_trippers.go:580]     Audit-Id: 206777ec-b39b-4b23-84e1-a32c81042e2d
	I0108 22:56:30.258728 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:30.258735 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:30.258741 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:30.258748 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:30.258754 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:30 GMT
	I0108 22:56:30.258869 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:30.755616 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:30.755642 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:30.755652 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:30.755659 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:30.758161 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:30.758182 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:30.758190 1215935 round_trippers.go:580]     Audit-Id: 31cf9492-fe09-4394-844c-0312ddb6923b
	I0108 22:56:30.758197 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:30.758203 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:30.758210 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:30.758217 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:30.758223 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:30 GMT
	I0108 22:56:30.758370 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:31.256299 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:31.256326 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:31.256339 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:31.256358 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:31.260336 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:56:31.260374 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:31.260402 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:31 GMT
	I0108 22:56:31.260409 1215935 round_trippers.go:580]     Audit-Id: 71f790da-5dd9-4a95-8752-2897040abe6d
	I0108 22:56:31.260415 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:31.260422 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:31.260429 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:31.260451 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:31.260702 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:31.261351 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:31.755975 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:31.756001 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:31.756011 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:31.756019 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:31.758672 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:31.758701 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:31.758710 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:31.758716 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:31.758722 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:31.758729 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:31.758736 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:31 GMT
	I0108 22:56:31.758748 1215935 round_trippers.go:580]     Audit-Id: 3d55c792-3844-4650-a9f4-9446efeb54fa
	I0108 22:56:31.758863 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:32.256138 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:32.256162 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:32.256173 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:32.256180 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:32.258737 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:32.258763 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:32.258771 1215935 round_trippers.go:580]     Audit-Id: 3a8c7e80-505f-44a7-9226-b7f1567d00a3
	I0108 22:56:32.258778 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:32.258784 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:32.258791 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:32.258799 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:32.258806 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:32 GMT
	I0108 22:56:32.258947 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:32.756104 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:32.756131 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:32.756142 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:32.756149 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:32.758702 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:32.758724 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:32.758732 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:32.758740 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:32 GMT
	I0108 22:56:32.758746 1215935 round_trippers.go:580]     Audit-Id: 4e462505-9ca0-4722-9325-8c428e20cc31
	I0108 22:56:32.758755 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:32.758761 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:32.758767 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:32.758920 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:33.256176 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:33.256202 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:33.256212 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:33.256220 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:33.259088 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:33.259113 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:33.259121 1215935 round_trippers.go:580]     Audit-Id: cc7ec294-a984-49c0-a632-cb288b82d0c8
	I0108 22:56:33.259127 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:33.259133 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:33.259141 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:33.259147 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:33.259154 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:33 GMT
	I0108 22:56:33.259291 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:33.755336 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:33.755362 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:33.755372 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:33.755379 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:33.757901 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:33.757924 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:33.757933 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:33.757940 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:33 GMT
	I0108 22:56:33.757946 1215935 round_trippers.go:580]     Audit-Id: 667bdd21-5b95-4ff5-93e1-8d43d4d3fe06
	I0108 22:56:33.757952 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:33.757958 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:33.757965 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:33.758124 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:33.758532 1215935 node_ready.go:58] node "multinode-265402-m02" has status "Ready":"False"
	I0108 22:56:34.256284 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:34.256314 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:34.256325 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:34.256338 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:34.259695 1215935 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 22:56:34.259718 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:34.259727 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:34.259734 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:34.259740 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:34.259746 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:34 GMT
	I0108 22:56:34.259752 1215935 round_trippers.go:580]     Audit-Id: 0b69d805-1e13-4f49-83a2-41ee19ba86e5
	I0108 22:56:34.259759 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:34.259893 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:34.756069 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:34.756091 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:34.756101 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:34.756126 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:34.758565 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:34.758594 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:34.758603 1215935 round_trippers.go:580]     Audit-Id: fc6d95fb-9b25-4d3d-bfef-afd9713653b1
	I0108 22:56:34.758610 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:34.758616 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:34.758630 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:34.758636 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:34.758644 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:34 GMT
	I0108 22:56:34.759003 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"490","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0108 22:56:35.255582 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:35.255608 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.255648 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.255661 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.258162 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.258186 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.258196 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.258228 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.258235 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.258245 1215935 round_trippers.go:580]     Audit-Id: 2628f47a-6bbd-4774-bb68-fa71788be495
	I0108 22:56:35.258253 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.258266 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.258613 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"512","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0108 22:56:35.259027 1215935 node_ready.go:49] node "multinode-265402-m02" has status "Ready":"True"
	I0108 22:56:35.259047 1215935 node_ready.go:38] duration metric: took 32.503959038s waiting for node "multinode-265402-m02" to be "Ready" ...
	I0108 22:56:35.259058 1215935 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:56:35.259125 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0108 22:56:35.259137 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.259146 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.259153 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.263329 1215935 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 22:56:35.263361 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.263369 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.263376 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.263386 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.263392 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.263398 1215935 round_trippers.go:580]     Audit-Id: b1e3edc9-5175-4edf-8e55-fd0e0db4d622
	I0108 22:56:35.263409 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.264432 1215935 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"513"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jxvsh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e30c3eb4-3f0a-40da-b222-8987a1951271","resourceVersion":"399","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0108 22:56:35.268369 1215935 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.268470 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jxvsh
	I0108 22:56:35.268486 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.268495 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.268503 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.275734 1215935 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 22:56:35.275775 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.275784 1215935 round_trippers.go:580]     Audit-Id: 8368b685-3a4a-41a2-acbc-1a9f9ccac625
	I0108 22:56:35.275791 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.275797 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.275803 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.275812 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.275818 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.275957 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jxvsh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"e30c3eb4-3f0a-40da-b222-8987a1951271","resourceVersion":"399","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"820eb3e9-b72f-4b23-afa9-13320041ac8c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"820eb3e9-b72f-4b23-afa9-13320041ac8c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0108 22:56:35.276491 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:35.276533 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.276548 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.276556 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.278838 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.278883 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.278891 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.278898 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.278905 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.278915 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.278927 1215935 round_trippers.go:580]     Audit-Id: a73a0f05-485f-48b6-b9bd-5b756a4d8763
	I0108 22:56:35.278933 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.279165 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:35.279632 1215935 pod_ready.go:92] pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:35.279650 1215935 pod_ready.go:81] duration metric: took 11.251517ms waiting for pod "coredns-5dd5756b68-jxvsh" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.279661 1215935 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.279728 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-265402
	I0108 22:56:35.279738 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.279746 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.279753 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.282206 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.282230 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.282237 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.282244 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.282250 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.282273 1215935 round_trippers.go:580]     Audit-Id: 72629191-434c-4110-acf0-b0a2d28ba944
	I0108 22:56:35.282287 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.282294 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.282416 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-265402","namespace":"kube-system","uid":"f0b50c9c-24ac-44c2-97d3-fd32c3fd1783","resourceVersion":"274","creationTimestamp":"2024-01-08T22:54:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9fbc9ea72760953590c9db956803870a","kubernetes.io/config.mirror":"9fbc9ea72760953590c9db956803870a","kubernetes.io/config.seen":"2024-01-08T22:54:49.639698825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0108 22:56:35.282932 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:35.282946 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.282955 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.282962 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.285474 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.285499 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.285508 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.285515 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.285521 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.285528 1215935 round_trippers.go:580]     Audit-Id: b976ad22-be58-4ddc-8efa-dfdfc0de8c48
	I0108 22:56:35.285534 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.285543 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.285766 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:35.286159 1215935 pod_ready.go:92] pod "etcd-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:35.286178 1215935 pod_ready.go:81] duration metric: took 6.50645ms waiting for pod "etcd-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.286195 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.286261 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-265402
	I0108 22:56:35.286269 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.286277 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.286284 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.288928 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.288981 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.289021 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.289030 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.289037 1215935 round_trippers.go:580]     Audit-Id: 9eb2f13e-c94d-4759-85ae-1de433d3d5cc
	I0108 22:56:35.289064 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.289079 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.289086 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.289246 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-265402","namespace":"kube-system","uid":"f89662f8-7aea-4d49-b0bc-369a5a93317e","resourceVersion":"267","creationTimestamp":"2024-01-08T22:54:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"153e2a2653c145307af823d6bdf14ecf","kubernetes.io/config.mirror":"153e2a2653c145307af823d6bdf14ecf","kubernetes.io/config.seen":"2024-01-08T22:54:49.639704773Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0108 22:56:35.289808 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:35.289824 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.289833 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.289840 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.292306 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.292324 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.292345 1215935 round_trippers.go:580]     Audit-Id: 25e4a3c1-5e77-4072-99cb-6157975f14d9
	I0108 22:56:35.292353 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.292359 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.292369 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.292378 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.292385 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.292543 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:35.292930 1215935 pod_ready.go:92] pod "kube-apiserver-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:35.292953 1215935 pod_ready.go:81] duration metric: took 6.746259ms waiting for pod "kube-apiserver-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.292964 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.293054 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-265402
	I0108 22:56:35.293064 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.293074 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.293081 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.295453 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.295476 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.295485 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.295492 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.295499 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.295506 1215935 round_trippers.go:580]     Audit-Id: 3abdcd02-5f26-4ff8-8fe8-799d72e39c3d
	I0108 22:56:35.295512 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.295518 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.295805 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-265402","namespace":"kube-system","uid":"bd5f7307-7f0b-4bd9-a40a-0c7adf289bc5","resourceVersion":"271","creationTimestamp":"2024-01-08T22:54:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6ec989d2fba992978bd611606acd568a","kubernetes.io/config.mirror":"6ec989d2fba992978bd611606acd568a","kubernetes.io/config.seen":"2024-01-08T22:54:49.639706037Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0108 22:56:35.296360 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:35.296376 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.296384 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.296392 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.298681 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.298737 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.298759 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.298781 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.298816 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.298827 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.298834 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.298841 1215935 round_trippers.go:580]     Audit-Id: eba7e3e6-76e7-4188-b808-8778957813c8
	I0108 22:56:35.299002 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:35.299406 1215935 pod_ready.go:92] pod "kube-controller-manager-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:35.299426 1215935 pod_ready.go:81] duration metric: took 6.451279ms waiting for pod "kube-controller-manager-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.299439 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rxh22" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.455677 1215935 request.go:629] Waited for 156.166236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxh22
	I0108 22:56:35.455756 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rxh22
	I0108 22:56:35.455765 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.455774 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.455781 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.458472 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.458495 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.458503 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.458510 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.458516 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.458523 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.458549 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.458556 1215935 round_trippers.go:580]     Audit-Id: 10d81edf-9bca-4850-8ac1-679d323dd7ab
	I0108 22:56:35.458736 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rxh22","generateName":"kube-proxy-","namespace":"kube-system","uid":"c9bb5b66-d21e-4304-8b6e-5e66f4992ee6","resourceVersion":"473","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d75724e6-2fba-4f4e-b9af-60383dc2d915","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75724e6-2fba-4f4e-b9af-60383dc2d915\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 22:56:35.656630 1215935 request.go:629] Waited for 197.324719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:35.656690 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402-m02
	I0108 22:56:35.656696 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.656712 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.656720 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.659344 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.659369 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.659377 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.659384 1215935 round_trippers.go:580]     Audit-Id: 38390f15-c0e5-4ebd-8ddc-884edecfb22b
	I0108 22:56:35.659390 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.659403 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.659410 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.659416 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.659599 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402-m02","uid":"d6711ea5-775a-4acb-a9d6-cc661f9e3479","resourceVersion":"512","creationTimestamp":"2024-01-08T22:56:01Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T22_56_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:56:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0108 22:56:35.660008 1215935 pod_ready.go:92] pod "kube-proxy-rxh22" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:35.660025 1215935 pod_ready.go:81] duration metric: took 360.575987ms waiting for pod "kube-proxy-rxh22" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.660035 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-shpdw" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:35.855879 1215935 request.go:629] Waited for 195.735766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shpdw
	I0108 22:56:35.855984 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-shpdw
	I0108 22:56:35.856022 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:35.856036 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:35.856059 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:35.858739 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:35.858801 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:35.858851 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:35 GMT
	I0108 22:56:35.858882 1215935 round_trippers.go:580]     Audit-Id: f56a28cb-40c3-4b0b-b139-d67de82d8ab3
	I0108 22:56:35.858906 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:35.858928 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:35.858949 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:35.858980 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:35.859151 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-shpdw","generateName":"kube-proxy-","namespace":"kube-system","uid":"6d87b28d-e3f3-48e7-9d07-f96a102d9294","resourceVersion":"364","creationTimestamp":"2024-01-08T22:55:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d75724e6-2fba-4f4e-b9af-60383dc2d915","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75724e6-2fba-4f4e-b9af-60383dc2d915\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0108 22:56:36.055935 1215935 request.go:629] Waited for 196.246638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:36.056001 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:36.056013 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:36.056023 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:36.056034 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:36.058907 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:36.058944 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:36.058953 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:36.058960 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:36 GMT
	I0108 22:56:36.058966 1215935 round_trippers.go:580]     Audit-Id: 9f349919-7bab-4b9e-a009-624f71a6ff3e
	I0108 22:56:36.058973 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:36.058979 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:36.058990 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:36.059188 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:36.059600 1215935 pod_ready.go:92] pod "kube-proxy-shpdw" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:36.059622 1215935 pod_ready.go:81] duration metric: took 399.578094ms waiting for pod "kube-proxy-shpdw" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:36.059634 1215935 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:36.256514 1215935 request.go:629] Waited for 196.814413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265402
	I0108 22:56:36.256597 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265402
	I0108 22:56:36.256625 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:36.256634 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:36.256642 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:36.258957 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:36.259025 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:36.259047 1215935 round_trippers.go:580]     Audit-Id: fd7967a1-d4e1-4a86-81ec-6cbf2881a6e5
	I0108 22:56:36.259072 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:36.259095 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:36.259102 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:36.259109 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:36.259115 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:36 GMT
	I0108 22:56:36.259242 1215935 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-265402","namespace":"kube-system","uid":"8fd809b7-494a-4b6e-a556-cdb385c37788","resourceVersion":"275","creationTimestamp":"2024-01-08T22:54:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7755fe167392a0b43d8453d49a1480f3","kubernetes.io/config.mirror":"7755fe167392a0b43d8453d49a1480f3","kubernetes.io/config.seen":"2024-01-08T22:54:58.076778153Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T22:54:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0108 22:56:36.456019 1215935 request.go:629] Waited for 196.312713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:36.456096 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265402
	I0108 22:56:36.456124 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:36.456134 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:36.456143 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:36.458664 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:36.458688 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:36.458697 1215935 round_trippers.go:580]     Audit-Id: 0601fe6d-175b-4194-9444-9e33b174f48b
	I0108 22:56:36.458704 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:36.458710 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:36.458717 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:36.458723 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:36.458733 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:36 GMT
	I0108 22:56:36.458879 1215935 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T22:54:54Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0108 22:56:36.459287 1215935 pod_ready.go:92] pod "kube-scheduler-multinode-265402" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:36.459304 1215935 pod_ready.go:81] duration metric: took 399.662894ms waiting for pod "kube-scheduler-multinode-265402" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:36.459316 1215935 pod_ready.go:38] duration metric: took 1.200246165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:56:36.459332 1215935 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:56:36.459394 1215935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:56:36.473417 1215935 system_svc.go:56] duration metric: took 14.077049ms WaitForService to wait for kubelet.
	I0108 22:56:36.473447 1215935 kubeadm.go:581] duration metric: took 33.737242591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:56:36.473470 1215935 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:56:36.655905 1215935 request.go:629] Waited for 182.324911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0108 22:56:36.655978 1215935 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0108 22:56:36.655990 1215935 round_trippers.go:469] Request Headers:
	I0108 22:56:36.656000 1215935 round_trippers.go:473]     Accept: application/json, */*
	I0108 22:56:36.656011 1215935 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0108 22:56:36.658723 1215935 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 22:56:36.658747 1215935 round_trippers.go:577] Response Headers:
	I0108 22:56:36.658755 1215935 round_trippers.go:580]     Audit-Id: 6211702a-c7a2-4598-a36f-234b541ca0cf
	I0108 22:56:36.658761 1215935 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 22:56:36.658768 1215935 round_trippers.go:580]     Content-Type: application/json
	I0108 22:56:36.658774 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f4abca55-6df2-40d9-be3e-cb6349e57319
	I0108 22:56:36.658780 1215935 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c9d4b9b5-2cb6-483e-be7e-d50479043310
	I0108 22:56:36.658796 1215935 round_trippers.go:580]     Date: Mon, 08 Jan 2024 22:56:36 GMT
	I0108 22:56:36.659145 1215935 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"514"},"items":[{"metadata":{"name":"multinode-265402","uid":"fcabdddc-1e69-470e-8b8c-768fe4c3a7d0","resourceVersion":"377","creationTimestamp":"2024-01-08T22:54:54Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-265402","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-265402","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T22_54_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I0108 22:56:36.659880 1215935 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 22:56:36.659902 1215935 node_conditions.go:123] node cpu capacity is 2
	I0108 22:56:36.659916 1215935 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0108 22:56:36.659921 1215935 node_conditions.go:123] node cpu capacity is 2
	I0108 22:56:36.659925 1215935 node_conditions.go:105] duration metric: took 186.450684ms to run NodePressure ...
	I0108 22:56:36.659936 1215935 start.go:228] waiting for startup goroutines ...
	I0108 22:56:36.659968 1215935 start.go:242] writing updated cluster config ...
	I0108 22:56:36.660284 1215935 ssh_runner.go:195] Run: rm -f paused
	I0108 22:56:36.724238 1215935 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:56:36.727106 1215935 out.go:177] * Done! kubectl is now configured to use "multinode-265402" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 08 22:56:07 multinode-265402 crio[903]: time="2024-01-08 22:56:07.336664579Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-dhbdf Namespace:kube-system ID:d372ae7acb22ecef2b3648d1939e33dff1344908b50c959661e7d3e0de6647ca UID:49fd4e2f-0617-4904-8f59-17192c16fa4f NetNS:/var/run/netns/f5ae99e0-6a6b-458d-8144-ca9202848a30 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:56:07 multinode-265402 crio[903]: time="2024-01-08 22:56:07.336810637Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-dhbdf from CNI network \"kindnet\" (type=ptp)"
	Jan 08 22:56:07 multinode-265402 crio[903]: time="2024-01-08 22:56:07.362597630Z" level=info msg="Stopped pod sandbox: d372ae7acb22ecef2b3648d1939e33dff1344908b50c959661e7d3e0de6647ca" id=a79f1316-7cf8-4c17-b5cd-3519fec07f39 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 08 22:56:08 multinode-265402 crio[903]: time="2024-01-08 22:56:08.360259025Z" level=info msg="Removing container: 83e9145c4665858dac2c7c34afdc3a3b78924f2ab1a98d4ed93def1a223c4b01" id=aaeee499-ba2e-4d40-bf4e-343653c9f2bd name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:56:08 multinode-265402 crio[903]: time="2024-01-08 22:56:08.383179460Z" level=info msg="Removed container 83e9145c4665858dac2c7c34afdc3a3b78924f2ab1a98d4ed93def1a223c4b01: kube-system/coredns-5dd5756b68-dhbdf/coredns" id=aaeee499-ba2e-4d40-bf4e-343653c9f2bd name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.951990508Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-5qwgb/POD" id=b6a83ea2-27a3-4f56-abb6-f8e9ec73844f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.952046130Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.970968882Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5qwgb Namespace:default ID:953ba608bbac1f34b34ee66fed68534e1b4c14a08597fcd752f4392d76c40671 UID:c0d04042-f2d0-47cb-96d0-f4a15760e907 NetNS:/var/run/netns/933a8ca4-81c5-4fd3-a281-b86b63cf6768 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.971010071Z" level=info msg="Adding pod default_busybox-5bc68d56bd-5qwgb to CNI network \"kindnet\" (type=ptp)"
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.981208074Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5qwgb Namespace:default ID:953ba608bbac1f34b34ee66fed68534e1b4c14a08597fcd752f4392d76c40671 UID:c0d04042-f2d0-47cb-96d0-f4a15760e907 NetNS:/var/run/netns/933a8ca4-81c5-4fd3-a281-b86b63cf6768 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.981367827Z" level=info msg="Checking pod default_busybox-5bc68d56bd-5qwgb for CNI network kindnet (type=ptp)"
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.983793205Z" level=info msg="Ran pod sandbox 953ba608bbac1f34b34ee66fed68534e1b4c14a08597fcd752f4392d76c40671 with infra container: default/busybox-5bc68d56bd-5qwgb/POD" id=b6a83ea2-27a3-4f56-abb6-f8e9ec73844f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.987337829Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=7e44c52e-247c-4c47-8069-ea641f2ce51e name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.987555575Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=7e44c52e-247c-4c47-8069-ea641f2ce51e name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.988519548Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=654c4829-e2af-4602-9496-79ce6da4369c name=/runtime.v1.ImageService/PullImage
	Jan 08 22:56:37 multinode-265402 crio[903]: time="2024-01-08 22:56:37.989515209Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 22:56:38 multinode-265402 crio[903]: time="2024-01-08 22:56:38.582669518Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.696686219Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=654c4829-e2af-4602-9496-79ce6da4369c name=/runtime.v1.ImageService/PullImage
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.697750705Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1bdb6351-a8e5-422e-a617-230eed9af437 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.698449588Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1bdb6351-a8e5-422e-a617-230eed9af437 name=/runtime.v1.ImageService/ImageStatus
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.699224877Z" level=info msg="Creating container: default/busybox-5bc68d56bd-5qwgb/busybox" id=3bd9ae2a-5b12-44fa-aad6-e2910db89fc8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.699307453Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.773861730Z" level=info msg="Created container 53b0184a417d7fddeadf95bac1568dd04ea74747964c5feb407bc99ea5a0e169: default/busybox-5bc68d56bd-5qwgb/busybox" id=3bd9ae2a-5b12-44fa-aad6-e2910db89fc8 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.774597347Z" level=info msg="Starting container: 53b0184a417d7fddeadf95bac1568dd04ea74747964c5feb407bc99ea5a0e169" id=5a4d3eb0-de28-4a6c-917b-95e50e57e25e name=/runtime.v1.RuntimeService/StartContainer
	Jan 08 22:56:39 multinode-265402 crio[903]: time="2024-01-08 22:56:39.782981233Z" level=info msg="Started container" PID=2197 containerID=53b0184a417d7fddeadf95bac1568dd04ea74747964c5feb407bc99ea5a0e169 description=default/busybox-5bc68d56bd-5qwgb/busybox id=5a4d3eb0-de28-4a6c-917b-95e50e57e25e name=/runtime.v1.RuntimeService/StartContainer sandboxID=953ba608bbac1f34b34ee66fed68534e1b4c14a08597fcd752f4392d76c40671
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	53b0184a417d7       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   953ba608bbac1       busybox-5bc68d56bd-5qwgb
	99b3401b8c2b5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   45a8d1cd42e3c       coredns-5dd5756b68-jxvsh
	993c63880dafb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   bbdb09becba48       storage-provisioner
	c93266a6ec577       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   9b367a1bc5fff       kindnet-q4lsx
	4e28d7bb1407c       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   ecac43dbf1ec7       kube-proxy-shpdw
	7f9a6970d4f37       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   37ec0c67abe63       kube-controller-manager-multinode-265402
	0c8c908f402b2       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   b390f99ab8a5e       kube-scheduler-multinode-265402
	e6725dd95b9ce       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   f1f3bb405244b       kube-apiserver-multinode-265402
	dfb60fcada599       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   5c0f9e362cb58       etcd-multinode-265402
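	For reference, a container listing in this shape can normally be reproduced on the node itself with crictl; a minimal sketch, assuming the profile is still running and CRI-O is on its default socket (the exact command is an assumption, not taken from this log):
	    out/minikube-linux-arm64 -p multinode-265402 ssh "sudo crictl ps -a"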
	
	
	==> coredns [99b3401b8c2b5bd02eff40e1bcfc5a7badde56a0176599326b0b002f4205dccc] <==
	[INFO] 10.244.0.4:54883 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009705s
	[INFO] 10.244.1.2:35447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117333s
	[INFO] 10.244.1.2:53169 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001120116s
	[INFO] 10.244.1.2:41454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071803s
	[INFO] 10.244.1.2:51747 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006404s
	[INFO] 10.244.1.2:47596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0023734s
	[INFO] 10.244.1.2:54138 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060734s
	[INFO] 10.244.1.2:32815 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062342s
	[INFO] 10.244.1.2:52938 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006281s
	[INFO] 10.244.0.4:41606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162501s
	[INFO] 10.244.0.4:33951 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105911s
	[INFO] 10.244.0.4:39936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063556s
	[INFO] 10.244.0.4:43371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068127s
	[INFO] 10.244.1.2:34019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093767s
	[INFO] 10.244.1.2:43770 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065681s
	[INFO] 10.244.1.2:53245 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059192s
	[INFO] 10.244.1.2:33163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062868s
	[INFO] 10.244.0.4:56340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081468s
	[INFO] 10.244.0.4:33353 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115208s
	[INFO] 10.244.0.4:35672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108635s
	[INFO] 10.244.0.4:56258 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079721s
	[INFO] 10.244.1.2:46306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118424s
	[INFO] 10.244.1.2:46471 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066387s
	[INFO] 10.244.1.2:36729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000072566s
	[INFO] 10.244.1.2:49946 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066773s
	
	
	==> describe nodes <==
	Name:               multinode-265402
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-265402
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-265402
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_54_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:54:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-265402
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:56:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:55:42 +0000   Mon, 08 Jan 2024 22:54:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:55:42 +0000   Mon, 08 Jan 2024 22:54:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:55:42 +0000   Mon, 08 Jan 2024 22:54:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:55:42 +0000   Mon, 08 Jan 2024 22:55:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-265402
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa3cbbcf1fea49ee9d6690b9272989bd
	  System UUID:                714206d2-8610-41a0-a6bc-cce01e09f203
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5qwgb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-jxvsh                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     95s
	  kube-system                 etcd-multinode-265402                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-q4lsx                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      95s
	  kube-system                 kube-apiserver-multinode-265402             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-multinode-265402    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-shpdw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-multinode-265402             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 93s   kube-proxy       
	  Normal  Starting                 107s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s  kubelet          Node multinode-265402 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s  kubelet          Node multinode-265402 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s  kubelet          Node multinode-265402 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s   node-controller  Node multinode-265402 event: Registered Node multinode-265402 in Controller
	  Normal  NodeReady                63s   kubelet          Node multinode-265402 status is now: NodeReady
	
	
	Name:               multinode-265402-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-265402-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-265402
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T22_56_01_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:56:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-265402-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:56:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:56:34 +0000   Mon, 08 Jan 2024 22:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:56:34 +0000   Mon, 08 Jan 2024 22:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:56:34 +0000   Mon, 08 Jan 2024 22:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:56:34 +0000   Mon, 08 Jan 2024 22:56:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-265402-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f26285a63e74c84841a5d035afb262a
	  System UUID:                b56a9a12-accd-44e8-9596-5a910b497f11
	  Boot ID:                    cf8959e1-1119-4140-86a9-5e54dd11ba57
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-kcr7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-zn2gf               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-proxy-rxh22            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  44s (x5 over 46s)  kubelet          Node multinode-265402-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 46s)  kubelet          Node multinode-265402-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 46s)  kubelet          Node multinode-265402-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-265402-m02 event: Registered Node multinode-265402-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-265402-m02 status is now: NodeReady
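	The two node descriptions above follow the layout of kubectl describe node; a minimal sketch for regenerating them against this profile, assuming the cluster is still up:
	    kubectl --context multinode-265402 describe node multinode-265402
	    kubectl --context multinode-265402 describe node multinode-265402-m02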
	
	
	==> dmesg <==
	[  +0.001218] FS-Cache: O-key=[8] 'ee3f5c0100000000'
	[  +0.000867] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001067] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000079e87184
	[  +0.001231] FS-Cache: N-key=[8] 'ee3f5c0100000000'
	[  +0.004146] FS-Cache: Duplicate cookie detected
	[  +0.000852] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001155] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000b0066836
	[  +0.001211] FS-Cache: O-key=[8] 'ee3f5c0100000000'
	[  +0.000822] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001155] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000767cf050
	[  +0.001229] FS-Cache: N-key=[8] 'ee3f5c0100000000'
	[  +3.371742] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000f6accb7c
	[  +0.001152] FS-Cache: O-key=[8] 'ed3f5c0100000000'
	[  +0.000838] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=00000000450c6a04
	[  +0.001169] FS-Cache: N-key=[8] 'ed3f5c0100000000'
	[  +0.456928] FS-Cache: Duplicate cookie detected
	[  +0.000821] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=0000000018ded0e0{9p.inode} n=00000000ee2dae65
	[  +0.001165] FS-Cache: O-key=[8] 'f33f5c0100000000'
	[  +0.000814] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001079] FS-Cache: N-cookie d=0000000018ded0e0{9p.inode} n=0000000024f05116
	[  +0.001224] FS-Cache: N-key=[8] 'f33f5c0100000000'
	
	
	==> etcd [dfb60fcada59941f4e444aeba7e22916a56246382f9274a10de76b04ed630f8b] <==
	{"level":"info","ts":"2024-01-08T22:54:50.534804Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-08T22:54:50.535268Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-01-08T22:54:50.535402Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T22:54:50.535431Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T22:54:50.53544Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T22:54:50.535878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-08T22:54:50.535966Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-08T22:54:50.765025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T22:54:50.765134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T22:54:50.765188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-08T22:54:50.765235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T22:54:50.765268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T22:54:50.765303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T22:54:50.765339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-08T22:54:50.769213Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-265402 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T22:54:50.769418Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:54:50.770653Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-08T22:54:50.770791Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:54:50.7731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T22:54:50.774085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T22:54:50.774491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T22:54:50.774556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T22:54:50.785088Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:54:50.785214Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T22:54:50.785246Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:56:45 up  5:39,  0 users,  load average: 0.78, 1.65, 1.78
	Linux multinode-265402 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [c93266a6ec57736815a58904be518b9f57375ee1b31e3f5f9fc2d5d5ee141591] <==
	I0108 22:55:41.947115       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:55:41.947147       1 main.go:227] handling current node
	I0108 22:55:51.962876       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:55:51.962911       1 main.go:227] handling current node
	I0108 22:56:01.976013       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:56:01.976154       1 main.go:227] handling current node
	I0108 22:56:01.976203       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 22:56:01.976547       1 main.go:250] Node multinode-265402-m02 has CIDR [10.244.1.0/24] 
	I0108 22:56:01.976841       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0108 22:56:11.981336       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:56:11.981365       1 main.go:227] handling current node
	I0108 22:56:11.981376       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 22:56:11.981383       1 main.go:250] Node multinode-265402-m02 has CIDR [10.244.1.0/24] 
	I0108 22:56:21.992796       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:56:21.992827       1 main.go:227] handling current node
	I0108 22:56:21.992839       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 22:56:21.992844       1 main.go:250] Node multinode-265402-m02 has CIDR [10.244.1.0/24] 
	I0108 22:56:32.014263       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:56:32.014298       1 main.go:227] handling current node
	I0108 22:56:32.014315       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 22:56:32.014321       1 main.go:250] Node multinode-265402-m02 has CIDR [10.244.1.0/24] 
	I0108 22:56:42.033186       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0108 22:56:42.033218       1 main.go:227] handling current node
	I0108 22:56:42.033230       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0108 22:56:42.033236       1 main.go:250] Node multinode-265402-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e6725dd95b9ce03f87a10787660fc07d2cd2c9b66520e84f07833c80d0c9d732] <==
	I0108 22:54:54.840390       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 22:54:54.841582       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 22:54:54.848107       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 22:54:54.848273       1 aggregator.go:166] initial CRD sync complete...
	I0108 22:54:54.848308       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 22:54:54.848338       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 22:54:54.848374       1 cache.go:39] Caches are synced for autoregister controller
	E0108 22:54:54.852973       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0108 22:54:55.056226       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 22:54:55.539254       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 22:54:55.544111       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 22:54:55.544138       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 22:54:56.134853       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 22:54:56.182498       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 22:54:56.246205       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 22:54:56.258178       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0108 22:54:56.259668       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 22:54:56.265832       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 22:54:56.812666       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 22:54:57.988306       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 22:54:58.007267       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 22:54:58.022485       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 22:55:10.491664       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 22:55:10.656499       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0108 22:56:43.080459       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:58834->192.168.58.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [7f9a6970d4f3798dd67b8e6569e5cdf76a9fdfa90288f32e29826496a07a3d3d] <==
	I0108 22:56:02.270612       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-dhbdf"
	I0108 22:56:02.289166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.733442ms"
	I0108 22:56:02.297860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.645184ms"
	I0108 22:56:02.297943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.163µs"
	I0108 22:56:04.722099       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-265402-m02"
	I0108 22:56:04.722161       1 event.go:307] "Event occurred" object="multinode-265402-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-265402-m02 event: Registered Node multinode-265402-m02 in Controller"
	I0108 22:56:07.381044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.661µs"
	I0108 22:56:08.374363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="185.23µs"
	I0108 22:56:08.388237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.321µs"
	I0108 22:56:08.395345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.615µs"
	I0108 22:56:34.869265       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-265402-m02"
	I0108 22:56:37.579584       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 22:56:37.602808       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-kcr7b"
	I0108 22:56:37.621708       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5qwgb"
	I0108 22:56:37.636521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.460918ms"
	I0108 22:56:37.644150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.571354ms"
	I0108 22:56:37.644320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="126.941µs"
	I0108 22:56:37.651542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.584µs"
	I0108 22:56:37.651856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.011µs"
	I0108 22:56:37.670819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="136.549µs"
	I0108 22:56:39.748095       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-kcr7b" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-kcr7b"
	I0108 22:56:39.776762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.983812ms"
	I0108 22:56:39.777085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="100.258µs"
	I0108 22:56:40.442884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.684308ms"
	I0108 22:56:40.443197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.187µs"
	
	
	==> kube-proxy [4e28d7bb1407ccc11f5f977c29eecb3e1a9192107c19d796e4a73a17f3326a2a] <==
	I0108 22:55:11.712189       1 server_others.go:69] "Using iptables proxy"
	I0108 22:55:11.731906       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0108 22:55:11.755672       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0108 22:55:11.757784       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:55:11.757872       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0108 22:55:11.757904       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0108 22:55:11.758025       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:55:11.758340       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:55:11.758547       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:55:11.759356       1 config.go:188] "Starting service config controller"
	I0108 22:55:11.759507       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:55:11.759726       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:55:11.760859       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:55:11.760383       1 config.go:315] "Starting node config controller"
	I0108 22:55:11.760872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:55:11.861751       1 shared_informer.go:318] Caches are synced for node config
	I0108 22:55:11.861757       1 shared_informer.go:318] Caches are synced for service config
	I0108 22:55:11.861863       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0c8c908f402b2b92af3168dc55972e5c21650bab792aa01eacfa5140c74a8e3c] <==
	W0108 22:54:54.815911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:54:54.816614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:54:54.816005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:54:54.816706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:54:54.816064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:54:54.816785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:54:55.662969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:54:55.663093       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:54:55.764382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:54:55.764484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:54:55.792890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:54:55.793100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:54:55.801184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 22:54:55.801284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0108 22:54:55.805539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:54:55.805637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:54:55.843683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 22:54:55.843810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 22:54:55.850519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 22:54:55.850636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 22:54:55.886827       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:54:55.886943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:54:56.115386       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 22:54:56.115420       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 22:54:58.205567       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.280138    1394 topology_manager.go:215] "Topology Admit Handler" podUID="21e5850b-39b9-4ab2-b42f-056e41fc39e0" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.281576    1394 topology_manager.go:215] "Topology Admit Handler" podUID="e30c3eb4-3f0a-40da-b222-8987a1951271" podNamespace="kube-system" podName="coredns-5dd5756b68-jxvsh"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.290454    1394 topology_manager.go:215] "Topology Admit Handler" podUID="49fd4e2f-0617-4904-8f59-17192c16fa4f" podNamespace="kube-system" podName="coredns-5dd5756b68-dhbdf"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369795    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49fd4e2f-0617-4904-8f59-17192c16fa4f-config-volume\") pod \"coredns-5dd5756b68-dhbdf\" (UID: \"49fd4e2f-0617-4904-8f59-17192c16fa4f\") " pod="kube-system/coredns-5dd5756b68-dhbdf"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369866    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wjn\" (UniqueName: \"kubernetes.io/projected/49fd4e2f-0617-4904-8f59-17192c16fa4f-kube-api-access-z5wjn\") pod \"coredns-5dd5756b68-dhbdf\" (UID: \"49fd4e2f-0617-4904-8f59-17192c16fa4f\") " pod="kube-system/coredns-5dd5756b68-dhbdf"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369901    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e30c3eb4-3f0a-40da-b222-8987a1951271-config-volume\") pod \"coredns-5dd5756b68-jxvsh\" (UID: \"e30c3eb4-3f0a-40da-b222-8987a1951271\") " pod="kube-system/coredns-5dd5756b68-jxvsh"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369928    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/21e5850b-39b9-4ab2-b42f-056e41fc39e0-tmp\") pod \"storage-provisioner\" (UID: \"21e5850b-39b9-4ab2-b42f-056e41fc39e0\") " pod="kube-system/storage-provisioner"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369955    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4gp\" (UniqueName: \"kubernetes.io/projected/21e5850b-39b9-4ab2-b42f-056e41fc39e0-kube-api-access-gx4gp\") pod \"storage-provisioner\" (UID: \"21e5850b-39b9-4ab2-b42f-056e41fc39e0\") " pod="kube-system/storage-provisioner"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: I0108 22:55:42.369980    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44ss\" (UniqueName: \"kubernetes.io/projected/e30c3eb4-3f0a-40da-b222-8987a1951271-kube-api-access-w44ss\") pod \"coredns-5dd5756b68-jxvsh\" (UID: \"e30c3eb4-3f0a-40da-b222-8987a1951271\") " pod="kube-system/coredns-5dd5756b68-jxvsh"
	Jan 08 22:55:42 multinode-265402 kubelet[1394]: W0108 22:55:42.656863    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/crio-d372ae7acb22ecef2b3648d1939e33dff1344908b50c959661e7d3e0de6647ca WatchSource:0}: Error finding container d372ae7acb22ecef2b3648d1939e33dff1344908b50c959661e7d3e0de6647ca: Status 404 returned error can't find the container with id d372ae7acb22ecef2b3648d1939e33dff1344908b50c959661e7d3e0de6647ca
	Jan 08 22:55:43 multinode-265402 kubelet[1394]: I0108 22:55:43.331656    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.331613009 podCreationTimestamp="2024-01-08 22:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 22:55:43.319790238 +0000 UTC m=+45.360188104" watchObservedRunningTime="2024-01-08 22:55:43.331613009 +0000 UTC m=+45.372010867"
	Jan 08 22:55:43 multinode-265402 kubelet[1394]: I0108 22:55:43.361420    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jxvsh" podStartSLOduration=33.361374046 podCreationTimestamp="2024-01-08 22:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 22:55:43.331993084 +0000 UTC m=+45.372390942" watchObservedRunningTime="2024-01-08 22:55:43.361374046 +0000 UTC m=+45.401771904"
	Jan 08 22:55:43 multinode-265402 kubelet[1394]: I0108 22:55:43.361516    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dhbdf" podStartSLOduration=33.361498353 podCreationTimestamp="2024-01-08 22:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 22:55:43.360679446 +0000 UTC m=+45.401077304" watchObservedRunningTime="2024-01-08 22:55:43.361498353 +0000 UTC m=+45.401896252"
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.438606    1394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5wjn\" (UniqueName: \"kubernetes.io/projected/49fd4e2f-0617-4904-8f59-17192c16fa4f-kube-api-access-z5wjn\") pod \"49fd4e2f-0617-4904-8f59-17192c16fa4f\" (UID: \"49fd4e2f-0617-4904-8f59-17192c16fa4f\") "
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.438664    1394 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49fd4e2f-0617-4904-8f59-17192c16fa4f-config-volume\") pod \"49fd4e2f-0617-4904-8f59-17192c16fa4f\" (UID: \"49fd4e2f-0617-4904-8f59-17192c16fa4f\") "
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.439045    1394 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49fd4e2f-0617-4904-8f59-17192c16fa4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "49fd4e2f-0617-4904-8f59-17192c16fa4f" (UID: "49fd4e2f-0617-4904-8f59-17192c16fa4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.442899    1394 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fd4e2f-0617-4904-8f59-17192c16fa4f-kube-api-access-z5wjn" (OuterVolumeSpecName: "kube-api-access-z5wjn") pod "49fd4e2f-0617-4904-8f59-17192c16fa4f" (UID: "49fd4e2f-0617-4904-8f59-17192c16fa4f"). InnerVolumeSpecName "kube-api-access-z5wjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.539494    1394 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z5wjn\" (UniqueName: \"kubernetes.io/projected/49fd4e2f-0617-4904-8f59-17192c16fa4f-kube-api-access-z5wjn\") on node \"multinode-265402\" DevicePath \"\""
	Jan 08 22:56:07 multinode-265402 kubelet[1394]: I0108 22:56:07.539536    1394 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49fd4e2f-0617-4904-8f59-17192c16fa4f-config-volume\") on node \"multinode-265402\" DevicePath \"\""
	Jan 08 22:56:08 multinode-265402 kubelet[1394]: I0108 22:56:08.358992    1394 scope.go:117] "RemoveContainer" containerID="83e9145c4665858dac2c7c34afdc3a3b78924f2ab1a98d4ed93def1a223c4b01"
	Jan 08 22:56:10 multinode-265402 kubelet[1394]: I0108 22:56:10.157754    1394 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="49fd4e2f-0617-4904-8f59-17192c16fa4f" path="/var/lib/kubelet/pods/49fd4e2f-0617-4904-8f59-17192c16fa4f/volumes"
	Jan 08 22:56:37 multinode-265402 kubelet[1394]: I0108 22:56:37.650199    1394 topology_manager.go:215] "Topology Admit Handler" podUID="c0d04042-f2d0-47cb-96d0-f4a15760e907" podNamespace="default" podName="busybox-5bc68d56bd-5qwgb"
	Jan 08 22:56:37 multinode-265402 kubelet[1394]: E0108 22:56:37.650281    1394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49fd4e2f-0617-4904-8f59-17192c16fa4f" containerName="coredns"
	Jan 08 22:56:37 multinode-265402 kubelet[1394]: I0108 22:56:37.650316    1394 memory_manager.go:346] "RemoveStaleState removing state" podUID="49fd4e2f-0617-4904-8f59-17192c16fa4f" containerName="coredns"
	Jan 08 22:56:37 multinode-265402 kubelet[1394]: I0108 22:56:37.737021    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99pvx\" (UniqueName: \"kubernetes.io/projected/c0d04042-f2d0-47cb-96d0-f4a15760e907-kube-api-access-99pvx\") pod \"busybox-5bc68d56bd-5qwgb\" (UID: \"c0d04042-f2d0-47cb-96d0-f4a15760e907\") " pod="default/busybox-5bc68d56bd-5qwgb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-265402 -n multinode-265402
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-265402 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.48s)
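The failing step exercises host reachability from the two busybox pods shown in the logs above; a minimal sketch for repeating that check by hand (pod names, host.minikube.internal, and the 192.168.58.1 host address are taken from the logs; the exact commands are an assumption, not the test's own invocation):
    kubectl --context multinode-265402 exec busybox-5bc68d56bd-5qwgb -- nslookup host.minikube.internal
    kubectl --context multinode-265402 exec busybox-5bc68d56bd-kcr7b -- ping -c 1 192.168.58.1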

                                                
                                    
x
+
TestRunningBinaryUpgrade (75.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.298545608.exe start -p running-upgrade-325440 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.298545608.exe start -p running-upgrade-325440 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m6.46647197s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-325440 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-325440 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.632432594s)

                                                
                                                
-- stdout --
	* [running-upgrade-325440] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-325440 in cluster running-upgrade-325440
	* Pulling base image v0.0.42-1703790982-17866 ...
	* Updating the running docker "running-upgrade-325440" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:13:12.111008 1276194 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:13:12.111233 1276194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:13:12.111258 1276194 out.go:309] Setting ErrFile to fd 2...
	I0108 23:13:12.111277 1276194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:13:12.111585 1276194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 23:13:12.112033 1276194 out.go:303] Setting JSON to false
	I0108 23:13:12.113115 1276194 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21333,"bootTime":1704734260,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 23:13:12.113226 1276194 start.go:138] virtualization:  
	I0108 23:13:12.116001 1276194 out.go:177] * [running-upgrade-325440] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 23:13:12.118641 1276194 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 23:13:12.120703 1276194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:13:12.118784 1276194 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0108 23:13:12.118826 1276194 notify.go:220] Checking for updates...
	I0108 23:13:12.122788 1276194 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 23:13:12.124795 1276194 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 23:13:12.126973 1276194 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 23:13:12.128866 1276194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:13:12.131083 1276194 config.go:182] Loaded profile config "running-upgrade-325440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:13:12.133955 1276194 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:13:12.136020 1276194 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:13:12.162726 1276194 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:13:12.162845 1276194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:13:12.286473 1276194 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 23:13:12.275696338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:13:12.286588 1276194 docker.go:295] overlay module found
	I0108 23:13:12.289549 1276194 out.go:177] * Using the docker driver based on existing profile
	I0108 23:13:12.286746 1276194 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0108 23:13:12.291485 1276194 start.go:298] selected driver: docker
	I0108 23:13:12.291511 1276194 start.go:902] validating driver "docker" against &{Name:running-upgrade-325440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-325440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.65 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:13:12.291598 1276194 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:13:12.292260 1276194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:13:12.362477 1276194 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2024-01-08 23:13:12.351362452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:13:12.362846 1276194 cni.go:84] Creating CNI manager for ""
	I0108 23:13:12.362876 1276194 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:13:12.362891 1276194 start_flags.go:321] config:
	{Name:running-upgrade-325440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-325440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.65 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:13:12.366809 1276194 out.go:177] * Starting control plane node running-upgrade-325440 in cluster running-upgrade-325440
	I0108 23:13:12.369201 1276194 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:13:12.371275 1276194 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 23:13:12.373255 1276194 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 23:13:12.373342 1276194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 23:13:12.392601 1276194 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0108 23:13:12.392630 1276194 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0108 23:13:12.438599 1276194 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0108 23:13:12.438753 1276194 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/running-upgrade-325440/config.json ...
	I0108 23:13:12.439003 1276194 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:13:12.439062 1276194 start.go:365] acquiring machines lock for running-upgrade-325440: {Name:mk1632618100030c9f87fff44a588436ce0b8b09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439119 1276194 start.go:369] acquired machines lock for "running-upgrade-325440" in 32.09µs
	I0108 23:13:12.439138 1276194 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:13:12.439144 1276194 fix.go:54] fixHost starting: 
	I0108 23:13:12.439424 1276194 cli_runner.go:164] Run: docker container inspect running-upgrade-325440 --format={{.State.Status}}
	I0108 23:13:12.439680 1276194 cache.go:107] acquiring lock: {Name:mk8191f0751ad02e7e922e2c4bc53476595b89dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439743 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:13:12.439751 1276194 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.797µs
	I0108 23:13:12.439760 1276194 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:13:12.439772 1276194 cache.go:107] acquiring lock: {Name:mka155ef0c2f150d02867e6959ed64f328602b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439814 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 23:13:12.439819 1276194 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 50.805µs
	I0108 23:13:12.439825 1276194 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0108 23:13:12.439834 1276194 cache.go:107] acquiring lock: {Name:mk00e10018b13da342cf356705aa85e29e811147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439859 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 23:13:12.439863 1276194 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 30.737µs
	I0108 23:13:12.439870 1276194 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0108 23:13:12.439878 1276194 cache.go:107] acquiring lock: {Name:mk04fd55195cd30eea8ba943bf4f2b6ca5847e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439905 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 23:13:12.439909 1276194 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 31.779µs
	I0108 23:13:12.439915 1276194 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0108 23:13:12.439923 1276194 cache.go:107] acquiring lock: {Name:mkbbd3fad7f43423e845544e96fe50e3a1721f24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439950 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 23:13:12.439954 1276194 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.172µs
	I0108 23:13:12.439960 1276194 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0108 23:13:12.439968 1276194 cache.go:107] acquiring lock: {Name:mkb97803584c6f0bc4bac659ac9219ac35c71b21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.439992 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 23:13:12.439999 1276194 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.087µs
	I0108 23:13:12.440006 1276194 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0108 23:13:12.440014 1276194 cache.go:107] acquiring lock: {Name:mkefdf5dfdfab5ebeea425b5a8a6f5b068bd39b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.440053 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 23:13:12.440057 1276194 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 44.505µs
	I0108 23:13:12.440063 1276194 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 23:13:12.440071 1276194 cache.go:107] acquiring lock: {Name:mk19e26006614dbf428ed8e1de132395c9848246 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:13:12.440095 1276194 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 23:13:12.440099 1276194 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.3µs
	I0108 23:13:12.440105 1276194 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 23:13:12.440111 1276194 cache.go:87] Successfully saved all images to host disk.
	I0108 23:13:12.458173 1276194 fix.go:102] recreateIfNeeded on running-upgrade-325440: state=Running err=<nil>
	W0108 23:13:12.458205 1276194 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:13:12.462441 1276194 out.go:177] * Updating the running docker "running-upgrade-325440" container ...
	I0108 23:13:12.464308 1276194 machine.go:88] provisioning docker machine ...
	I0108 23:13:12.464334 1276194 ubuntu.go:169] provisioning hostname "running-upgrade-325440"
	I0108 23:13:12.464410 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:12.483280 1276194 main.go:141] libmachine: Using SSH client type: native
	I0108 23:13:12.483712 1276194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34219 <nil> <nil>}
	I0108 23:13:12.483730 1276194 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-325440 && echo "running-upgrade-325440" | sudo tee /etc/hostname
	I0108 23:13:12.653837 1276194 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-325440
	
	I0108 23:13:12.653918 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:12.673228 1276194 main.go:141] libmachine: Using SSH client type: native
	I0108 23:13:12.673637 1276194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34219 <nil> <nil>}
	I0108 23:13:12.673663 1276194 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-325440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-325440/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-325440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:13:12.819043 1276194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:13:12.819071 1276194 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 23:13:12.819106 1276194 ubuntu.go:177] setting up certificates
	I0108 23:13:12.819120 1276194 provision.go:83] configureAuth start
	I0108 23:13:12.819191 1276194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-325440
	I0108 23:13:12.838834 1276194 provision.go:138] copyHostCerts
	I0108 23:13:12.838927 1276194 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 23:13:12.838955 1276194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 23:13:12.839041 1276194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 23:13:12.839138 1276194 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 23:13:12.839151 1276194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 23:13:12.839179 1276194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 23:13:12.839228 1276194 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 23:13:12.839236 1276194 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 23:13:12.839260 1276194 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 23:13:12.839303 1276194 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-325440 san=[192.168.70.65 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-325440]
	I0108 23:13:13.093770 1276194 provision.go:172] copyRemoteCerts
	I0108 23:13:13.093884 1276194 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:13:13.093944 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:13.113432 1276194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34219 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/running-upgrade-325440/id_rsa Username:docker}
	I0108 23:13:13.219681 1276194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 23:13:13.246307 1276194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:13:13.277161 1276194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:13:13.317712 1276194 provision.go:86] duration metric: configureAuth took 498.571184ms
	I0108 23:13:13.317737 1276194 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:13:13.317917 1276194 config.go:182] Loaded profile config "running-upgrade-325440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:13:13.318028 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:13.343245 1276194 main.go:141] libmachine: Using SSH client type: native
	I0108 23:13:13.343745 1276194 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34219 <nil> <nil>}
	I0108 23:13:13.343800 1276194 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:13:14.131630 1276194 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:13:14.131653 1276194 machine.go:91] provisioned docker machine in 1.667329055s
	I0108 23:13:14.131663 1276194 start.go:300] post-start starting for "running-upgrade-325440" (driver="docker")
	I0108 23:13:14.131675 1276194 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:13:14.131753 1276194 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:13:14.131791 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:14.157836 1276194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34219 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/running-upgrade-325440/id_rsa Username:docker}
	I0108 23:13:14.260190 1276194 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:13:14.264753 1276194 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:13:14.264782 1276194 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:13:14.264794 1276194 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:13:14.264802 1276194 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 23:13:14.264813 1276194 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 23:13:14.264873 1276194 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 23:13:14.264961 1276194 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 23:13:14.265094 1276194 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:13:14.277582 1276194 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 23:13:14.318000 1276194 start.go:303] post-start completed in 186.32159ms
	I0108 23:13:14.318083 1276194 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:13:14.318128 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:14.348571 1276194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34219 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/running-upgrade-325440/id_rsa Username:docker}
	I0108 23:13:14.448698 1276194 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:13:14.454870 1276194 fix.go:56] fixHost completed within 2.015717108s
	I0108 23:13:14.454896 1276194 start.go:83] releasing machines lock for "running-upgrade-325440", held for 2.015764057s
	I0108 23:13:14.454968 1276194 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-325440
	I0108 23:13:14.473587 1276194 ssh_runner.go:195] Run: cat /version.json
	I0108 23:13:14.473655 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:14.473943 1276194 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:13:14.473987 1276194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-325440
	I0108 23:13:14.495255 1276194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34219 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/running-upgrade-325440/id_rsa Username:docker}
	I0108 23:13:14.497129 1276194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34219 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/running-upgrade-325440/id_rsa Username:docker}
	W0108 23:13:14.593844 1276194 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:13:14.593955 1276194 ssh_runner.go:195] Run: systemctl --version
	I0108 23:13:14.738155 1276194 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:13:14.857476 1276194 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:13:14.864868 1276194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:13:14.886986 1276194 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:13:14.887064 1276194 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:13:14.918185 1276194 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:13:14.918211 1276194 start.go:475] detecting cgroup driver to use...
	I0108 23:13:14.918263 1276194 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:13:14.918354 1276194 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:13:14.951540 1276194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:13:14.964601 1276194 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:13:14.964723 1276194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:13:14.979617 1276194 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:13:14.992633 1276194 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:13:15.010544 1276194 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:13:15.010674 1276194 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:13:15.200935 1276194 docker.go:219] disabling docker service ...
	I0108 23:13:15.201079 1276194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:13:15.233326 1276194 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:13:15.260248 1276194 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:13:15.460703 1276194 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:13:15.613813 1276194 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:13:15.627138 1276194 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:13:15.644681 1276194 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:13:15.644746 1276194 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:13:15.658623 1276194 out.go:177] 
	W0108 23:13:15.660897 1276194 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:13:15.660973 1276194 out.go:239] * 
	* 
	W0108 23:13:15.663906 1276194 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:13:15.666190 1276194 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-325440 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 23:13:15.691724267 +0000 UTC m=+2596.381457589
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-325440
helpers_test.go:235: (dbg) docker inspect running-upgrade-325440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588",
	        "Created": "2024-01-08T23:12:27.551147259Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1272718,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T23:12:28.010884067Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588/hostname",
	        "HostsPath": "/var/lib/docker/containers/019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588/hosts",
	        "LogPath": "/var/lib/docker/containers/019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588/019b12544405223374451f9f625c44fb3745adcb134195f6382f6f5a1a670588-json.log",
	        "Name": "/running-upgrade-325440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-325440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-325440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ea1d477da28d2a1c3e2de305693786f401b0ba81f09e1df725fd7dcfb5fe26de-init/diff:/var/lib/docker/overlay2/5c565fc02f9fc40c36ada1ad701c8076425372509806915a92cc340352b69598/diff:/var/lib/docker/overlay2/62923341211a0ffb823a22f5cb316e2b41ce33c8ee3c679711b8b4ef50b443cd/diff:/var/lib/docker/overlay2/7d7def56b31570d417881d96f38efb052116ac136b42e66e78b86a24a9de91d0/diff:/var/lib/docker/overlay2/cf6ae910835fa35636f6fdaa77d4b526b149c380c6ae257a743d5633375f5a2f/diff:/var/lib/docker/overlay2/0a8cad2c2e796e83db446abd8c6c8bbdb2d6d058cd92ca6f7ea41fe66f87e9b5/diff:/var/lib/docker/overlay2/a9f8a82632ea47985a3d2dd8ea68dbe3fbedc8e98e622d905cf29cdb2b5b581b/diff:/var/lib/docker/overlay2/b0822bf91412bf12ab531013c46d86a93a37d94c43bb082471afd4408a77d7da/diff:/var/lib/docker/overlay2/08f55230bc3b633a870f74efea12b5a9eedf949f196498bf9ddc3d7640906804/diff:/var/lib/docker/overlay2/29a66ec5a107552529aad375f0551cd4739bf03f58e06d03fb01aa24193450ec/diff:/var/lib/docker/overlay2/94828e
4f087a4f7baba148a2ab51a2ba8232ce5b35c423c80d089fbf25e650fc/diff:/var/lib/docker/overlay2/887c25e061b0dcc8dc91d03e2a59637cda0dc141ba5341d1a35ddb862645f125/diff:/var/lib/docker/overlay2/c12a29144b35c4a2a49cf24f892d23b96b261a741f040039dd40f2a4b1eea9f1/diff:/var/lib/docker/overlay2/03194a55188dbd1dbec7507af6a8ad1c6255caddde04668d4973ef631e153013/diff:/var/lib/docker/overlay2/c24fed96036fd6ad1c9ef2006b8c1fb0264d11cbd2534168fb997e62b54862fd/diff:/var/lib/docker/overlay2/81403548d9729e58f47eafefa1a76e756fb8fe4fef6a7079243a1cb629d6aaf1/diff:/var/lib/docker/overlay2/91f5831ecde7e6f647770b87a847111fe6316d69fc25b8a8d997ce59bcb03ea5/diff:/var/lib/docker/overlay2/522d7e88c7a20650576d371267682f6ba43692ed7479f1062ac02edbd85255d1/diff:/var/lib/docker/overlay2/a76f486fa06861aeaad977dbc40216f264c5f720f15b954853e5670b618b206b/diff:/var/lib/docker/overlay2/d0de17c81bdfc126f0c188f293f419ac2a7ab6dce74d2a82a7a7d9a1b3a4f009/diff:/var/lib/docker/overlay2/0fd5f428434d838bc141c351f6600799db846c4c33e0d388f7e35332f21a356b/diff:/var/lib/d
ocker/overlay2/a73bebd19e3f4adbd3811f63922df842f5f1210c86e172c8caaa9af2f2a32a63/diff:/var/lib/docker/overlay2/e5d5f1ac171cd174e449d658924e67675cc000fc8c927436210d66eea31e910d/diff:/var/lib/docker/overlay2/7c5c94c45c769c813ea1d29ac53252b23ea684ace7af95a7f23732337674827d/diff:/var/lib/docker/overlay2/da60a72395c30fcf1ecd1f5fc3ca877af195809fca5f3ca2ff4a17ffeb3f6e32/diff:/var/lib/docker/overlay2/30b902eb5134212cda769790531bfca99dc66098321ab87667fed0963e0fe3b6/diff:/var/lib/docker/overlay2/6692de39eab503d562e2badecdc60ef07bdb69765cb1612d56f1442bd68b3f7f/diff:/var/lib/docker/overlay2/4dcdb499992192b4ff3f25b758a73f5ae6f372c9e160be5a7a34dfd363f6f5e5/diff:/var/lib/docker/overlay2/968aae618b3206477847b36861043c4bb7aa0503e2299a3a4cc2335300c5fb53/diff:/var/lib/docker/overlay2/de4f3466257af864f22fd8adbef1f55e614dda8b91604558a2159f6ee80ec664/diff:/var/lib/docker/overlay2/94c9c9c34023e111d5ffb5ef1f55df4cc66ece4da10d4205e7751228e70dd1da/diff:/var/lib/docker/overlay2/214b19dd7d0b160cbc273399970ca799d4da737a5212973bdf91d146279
d19da/diff:/var/lib/docker/overlay2/3329a4c97678a271c036380de92dc5ab6721d7e50bef2ee92d5af366b3701f11/diff:/var/lib/docker/overlay2/bb223665fff33a0360fe4f01de1f73111cc086af47d4587238149ebbc7a641f3/diff:/var/lib/docker/overlay2/7b7a9366d1fb25096ca960b2e98505c460718258a9f962e552a4ddf594900d57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea1d477da28d2a1c3e2de305693786f401b0ba81f09e1df725fd7dcfb5fe26de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea1d477da28d2a1c3e2de305693786f401b0ba81f09e1df725fd7dcfb5fe26de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea1d477da28d2a1c3e2de305693786f401b0ba81f09e1df725fd7dcfb5fe26de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-325440",
	                "Source": "/var/lib/docker/volumes/running-upgrade-325440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-325440",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-325440",
	                "name.minikube.sigs.k8s.io": "running-upgrade-325440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53ab84ac1f6fd640919d16aa4014918d2040df7d43d56447a652c9d1530a7180",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/53ab84ac1f6f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-325440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.65"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "019b12544405",
	                        "running-upgrade-325440"
	                    ],
	                    "NetworkID": "7d2c6bdf8ccb05971cab7bcc3be0ed535aecb6a9c11573968f4774bc82962679",
	                    "EndpointID": "9ef5aa64dfb958926fbe47ba1eb8df45763788fe3add4c0da1ae9025e8c105bb",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.65",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:41",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-325440 -n running-upgrade-325440
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-325440 -n running-upgrade-325440: exit status 4 (455.447822ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:13:16.085203 1276883 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-325440" does not appear in /home/jenkins/minikube-integration/17866-1146913/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-325440" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-325440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-325440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-325440: (3.186063046s)
--- FAIL: TestRunningBinaryUpgrade (75.27s)
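Note on the failure above: the upgraded binary aborts at RUNTIME_ENABLE because the pause_image rewrite targets /etc/crio/crio.conf.d/02-crio.conf, and that drop-in file does not exist inside the container created from the old kicbase v0.0.17 image (sed exits with status 2, "No such file or directory"). A minimal Go sketch of a more defensive lookup is shown below; the helper name and the fallback to /etc/crio/crio.conf are illustrative assumptions, not minikube's actual implementation.

	// Sketch only (assumed helper, not minikube's code): pick whichever cri-o
	// config file actually exists before rewriting pause_image, instead of
	// assuming the /etc/crio/crio.conf.d/ drop-in layout of newer base images.
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func crioConfForPauseImage() (string, error) {
		// Preferred drop-in location first, then the legacy single-file path
		// (assumed fallback; the v0.0.17 image in this run lacks the drop-in).
		for _, p := range []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"} {
			if _, err := os.Stat(p); err == nil {
				return p, nil
			}
		}
		return "", fmt.Errorf("no cri-o configuration file found")
	}
	
	func main() {
		path, err := crioConfForPauseImage()
		fmt.Println(path, err)
	}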

                                                
                                    
x
+
TestMissingContainerUpgrade (174.38s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3372260245.exe start -p missing-upgrade-955462 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3372260245.exe start -p missing-upgrade-955462 --memory=2200 --driver=docker  --container-runtime=crio: (2m12.120536644s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-955462
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-955462: (1.830091706s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-955462
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-955462 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-955462 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (36.749820963s)

                                                
                                                
-- stdout --
	* [missing-upgrade-955462] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-955462 in cluster missing-upgrade-955462
	* Pulling base image v0.0.42-1703790982-17866 ...
	* docker "missing-upgrade-955462" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:09:58.290839 1262907 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:09:58.291008 1262907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:09:58.291016 1262907 out.go:309] Setting ErrFile to fd 2...
	I0108 23:09:58.291022 1262907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:09:58.291305 1262907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 23:09:58.292058 1262907 out.go:303] Setting JSON to false
	I0108 23:09:58.293131 1262907 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21139,"bootTime":1704734260,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 23:09:58.293211 1262907 start.go:138] virtualization:  
	I0108 23:09:58.297158 1262907 out.go:177] * [missing-upgrade-955462] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 23:09:58.299800 1262907 notify.go:220] Checking for updates...
	I0108 23:09:58.300444 1262907 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 23:09:58.302196 1262907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:09:58.304536 1262907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 23:09:58.306286 1262907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 23:09:58.308227 1262907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 23:09:58.310043 1262907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:09:58.312236 1262907 config.go:182] Loaded profile config "missing-upgrade-955462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:09:58.314521 1262907 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:09:58.316200 1262907 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:09:58.340609 1262907 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:09:58.340739 1262907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:09:58.423038 1262907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2024-01-08 23:09:58.412311192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:09:58.423147 1262907 docker.go:295] overlay module found
	I0108 23:09:58.425482 1262907 out.go:177] * Using the docker driver based on existing profile
	I0108 23:09:58.427513 1262907 start.go:298] selected driver: docker
	I0108 23:09:58.427530 1262907 start.go:902] validating driver "docker" against &{Name:missing-upgrade-955462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-955462 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.71 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:09:58.427637 1262907 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:09:58.428282 1262907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:09:58.501309 1262907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2024-01-08 23:09:58.491701857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:09:58.501681 1262907 cni.go:84] Creating CNI manager for ""
	I0108 23:09:58.501714 1262907 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:09:58.501733 1262907 start_flags.go:321] config:
	{Name:missing-upgrade-955462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-955462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.71 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:09:58.504188 1262907 out.go:177] * Starting control plane node missing-upgrade-955462 in cluster missing-upgrade-955462
	I0108 23:09:58.505868 1262907 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:09:58.507866 1262907 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 23:09:58.509454 1262907 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 23:09:58.509486 1262907 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 23:09:58.527190 1262907 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0108 23:09:58.527375 1262907 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0108 23:09:58.528174 1262907 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0108 23:09:58.574699 1262907 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
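	The 404 above means no prebuilt arm64 cri-o preload tarball is published for Kubernetes v1.20.2, so minikube falls back to caching each control-plane image individually in the cache.go lines that follow. A quick manual probe of the same URL (assuming curl is available on the host) distinguishes a genuinely missing preload from a transient network error:
	  curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 | head -n 1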
	I0108 23:09:58.574874 1262907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/missing-upgrade-955462/config.json ...
	I0108 23:09:58.575006 1262907 cache.go:107] acquiring lock: {Name:mk8191f0751ad02e7e922e2c4bc53476595b89dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575092 1262907 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:09:58.575100 1262907 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.283µs
	I0108 23:09:58.575110 1262907 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:09:58.575119 1262907 cache.go:107] acquiring lock: {Name:mka155ef0c2f150d02867e6959ed64f328602b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575151 1262907 cache.go:107] acquiring lock: {Name:mkbbd3fad7f43423e845544e96fe50e3a1721f24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575226 1262907 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0108 23:09:58.575241 1262907 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0108 23:09:58.575371 1262907 cache.go:107] acquiring lock: {Name:mkb97803584c6f0bc4bac659ac9219ac35c71b21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575390 1262907 cache.go:107] acquiring lock: {Name:mk04fd55195cd30eea8ba943bf4f2b6ca5847e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575444 1262907 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 23:09:58.575459 1262907 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0108 23:09:58.575503 1262907 cache.go:107] acquiring lock: {Name:mkefdf5dfdfab5ebeea425b5a8a6f5b068bd39b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575560 1262907 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0108 23:09:58.575608 1262907 cache.go:107] acquiring lock: {Name:mk19e26006614dbf428ed8e1de132395c9848246 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575373 1262907 cache.go:107] acquiring lock: {Name:mk00e10018b13da342cf356705aa85e29e811147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:09:58.575674 1262907 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0108 23:09:58.575700 1262907 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0108 23:09:58.576883 1262907 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0108 23:09:58.577337 1262907 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0108 23:09:58.577501 1262907 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0108 23:09:58.577764 1262907 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0108 23:09:58.577769 1262907 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0108 23:09:58.577939 1262907 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 23:09:58.578177 1262907 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	W0108 23:09:58.907973 1262907 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0108 23:09:58.908071 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W0108 23:09:58.948720 1262907 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0108 23:09:58.948799 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0108 23:09:58.974579 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0108 23:09:58.978347 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0108 23:09:58.982470 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0108 23:09:59.002863 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0108 23:09:59.055458 1262907 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0108 23:09:59.055527 1262907 cache.go:162] opening:  /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
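	The "arch mismatch: want arm64 got amd64. fixing" warnings indicate that the first manifest resolved for etcd, kube-proxy and coredns was the amd64 one, and minikube re-resolves the arm64 variant before writing each cache tarball. One way to confirm by hand which architectures a tag advertises (assuming a Docker CLI with docker manifest support) is:
	  docker manifest inspect registry.k8s.io/etcd:3.4.13-0 | grep '"architecture"'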
	I0108 23:09:59.093037 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 23:09:59.093066 1262907 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 517.693182ms
	I0108 23:09:59.093079 1262907 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0108 23:09:59.418242 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 23:09:59.418274 1262907 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 842.667037ms
	I0108 23:09:59.418288 1262907 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 23:09:59.427790 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 23:09:59.427872 1262907 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 852.479425ms
	I0108 23:09:59.427901 1262907 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0108 23:09:59.475777 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 23:09:59.475852 1262907 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 900.730965ms
	I0108 23:09:59.475984 1262907 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0108 23:09:59.924194 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 23:09:59.924226 1262907 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.348853173s
	I0108 23:09:59.924240 1262907 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0108 23:10:00.485798 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 23:10:00.485840 1262907 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.910692274s
	I0108 23:10:00.485855 1262907 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0108 23:10:01.349301 1262907 cache.go:157] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 23:10:01.349329 1262907 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 2.773824992s
	I0108 23:10:01.349343 1262907 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 23:10:01.349384 1262907 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 36.04 M
	I0108 23:10:07.132465 1262907 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0108 23:10:07.132475 1262907 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0108 23:10:07.374562 1262907 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0108 23:10:07.374599 1262907 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:10:07.374653 1262907 start.go:365] acquiring machines lock for missing-upgrade-955462: {Name:mkdcb75aa25523374c2776082e9d273826f14786 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:10:07.374714 1262907 start.go:369] acquired machines lock for "missing-upgrade-955462" in 42.264µs
	I0108 23:10:07.374734 1262907 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:10:07.374741 1262907 fix.go:54] fixHost starting: 
	I0108 23:10:07.375019 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:07.405921 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:07.405984 1262907 fix.go:102] recreateIfNeeded on missing-upgrade-955462: state= err=unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:07.406025 1262907 fix.go:107] machineExists: false. err=machine does not exist
	I0108 23:10:07.421988 1262907 out.go:177] * docker "missing-upgrade-955462" container is missing, will recreate.
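	Every inspect so far has failed with "No such container: missing-upgrade-955462": the saved profile still references a machine whose container no longer exists in the Docker daemon, which is the scenario this upgrade test exercises. The same condition can be confirmed manually (a check outside the test itself) by listing any container matching the profile name:
	  docker ps -a --filter name=missing-upgrade-955462 --format '{{.Names}}\t{{.Status}}'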
	I0108 23:10:07.434029 1262907 delete.go:124] DEMOLISHING missing-upgrade-955462 ...
	I0108 23:10:07.434154 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:07.454521 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	W0108 23:10:07.454596 1262907 stop.go:75] unable to get state: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:07.454612 1262907 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:07.455202 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:07.474148 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:07.474212 1262907 delete.go:82] Unable to get host status for missing-upgrade-955462, assuming it has already been deleted: state: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:07.474288 1262907 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-955462
	W0108 23:10:07.497578 1262907 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-955462 returned with exit code 1
	I0108 23:10:07.497611 1262907 kic.go:371] could not find the container missing-upgrade-955462 to remove it. will try anyways
	I0108 23:10:07.497662 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:07.522404 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	W0108 23:10:07.522464 1262907 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:07.522542 1262907 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-955462 /bin/bash -c "sudo init 0"
	W0108 23:10:07.544488 1262907 cli_runner.go:211] docker exec --privileged -t missing-upgrade-955462 /bin/bash -c "sudo init 0" returned with exit code 1
	I0108 23:10:07.544540 1262907 oci.go:650] error shutdown missing-upgrade-955462: docker exec --privileged -t missing-upgrade-955462 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:08.546977 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:08.581798 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:08.581854 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:08.581864 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:08.581893 1262907 retry.go:31] will retry after 342.252118ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:08.924366 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:08.942753 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:08.942815 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:08.942828 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:08.942856 1262907 retry.go:31] will retry after 622.743524ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:09.566718 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:09.607626 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:09.607693 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:09.607707 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:09.607742 1262907 retry.go:31] will retry after 696.657299ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:10.305200 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:10.330831 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:10.330878 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:10.330887 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:10.330912 1262907 retry.go:31] will retry after 1.884579856s: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:12.215745 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:12.232626 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:12.232683 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:12.232692 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:12.232723 1262907 retry.go:31] will retry after 2.599916374s: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:14.833125 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:14.856174 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:14.856240 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:14.856250 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:14.856279 1262907 retry.go:31] will retry after 5.353019601s: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:20.209527 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:20.235064 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:20.235126 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:20.235136 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:20.235173 1262907 retry.go:31] will retry after 7.968126461s: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:28.205120 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:28.241289 1262907 cli_runner.go:211] docker container inspect missing-upgrade-955462 --format={{.State.Status}} returned with exit code 1
	I0108 23:10:28.241354 1262907 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	I0108 23:10:28.241363 1262907 oci.go:664] temporary error: container missing-upgrade-955462 status is  but expect it to be exited
	I0108 23:10:28.241396 1262907 oci.go:88] couldn't shut down missing-upgrade-955462 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-955462": docker container inspect missing-upgrade-955462 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-955462
	 
	I0108 23:10:28.241468 1262907 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-955462
	I0108 23:10:28.269298 1262907 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-955462
	W0108 23:10:28.287714 1262907 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-955462 returned with exit code 1
	I0108 23:10:28.287801 1262907 cli_runner.go:164] Run: docker network inspect missing-upgrade-955462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:10:28.307434 1262907 cli_runner.go:164] Run: docker network rm missing-upgrade-955462
	I0108 23:10:28.404753 1262907 fix.go:114] Sleeping 1 second for extra luck!
	I0108 23:10:29.405728 1262907 start.go:125] createHost starting for "" (driver="docker")
	I0108 23:10:29.408356 1262907 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0108 23:10:29.408513 1262907 start.go:159] libmachine.API.Create for "missing-upgrade-955462" (driver="docker")
	I0108 23:10:29.408544 1262907 client.go:168] LocalClient.Create starting
	I0108 23:10:29.409034 1262907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem
	I0108 23:10:29.409077 1262907 main.go:141] libmachine: Decoding PEM data...
	I0108 23:10:29.409097 1262907 main.go:141] libmachine: Parsing certificate...
	I0108 23:10:29.409157 1262907 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem
	I0108 23:10:29.409187 1262907 main.go:141] libmachine: Decoding PEM data...
	I0108 23:10:29.409201 1262907 main.go:141] libmachine: Parsing certificate...
	I0108 23:10:29.409475 1262907 cli_runner.go:164] Run: docker network inspect missing-upgrade-955462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0108 23:10:29.427061 1262907 cli_runner.go:211] docker network inspect missing-upgrade-955462 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0108 23:10:29.427167 1262907 network_create.go:281] running [docker network inspect missing-upgrade-955462] to gather additional debugging logs...
	I0108 23:10:29.427210 1262907 cli_runner.go:164] Run: docker network inspect missing-upgrade-955462
	W0108 23:10:29.445663 1262907 cli_runner.go:211] docker network inspect missing-upgrade-955462 returned with exit code 1
	I0108 23:10:29.445696 1262907 network_create.go:284] error running [docker network inspect missing-upgrade-955462]: docker network inspect missing-upgrade-955462: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-955462 not found
	I0108 23:10:29.445709 1262907 network_create.go:286] output of [docker network inspect missing-upgrade-955462]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-955462 not found
	
	** /stderr **
	I0108 23:10:29.445822 1262907 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0108 23:10:29.463019 1262907 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28dcec50f1fd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ff:2c:12:22} reservation:<nil>}
	I0108 23:10:29.463925 1262907 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-19e438b586a4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:99:34:f9:ca} reservation:<nil>}
	I0108 23:10:29.464356 1262907 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-dce8e78c486e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:04:ec:44:29} reservation:<nil>}
	I0108 23:10:29.465485 1262907 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40036dc640}
	I0108 23:10:29.465551 1262907 network_create.go:124] attempt to create docker network missing-upgrade-955462 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0108 23:10:29.465627 1262907 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-955462 missing-upgrade-955462
	I0108 23:10:29.542160 1262907 network_create.go:108] docker network missing-upgrade-955462 192.168.76.0/24 created
	I0108 23:10:29.542198 1262907 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-955462" container
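	With 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 already occupied by other profiles, the recreated network lands on 192.168.76.0/24 and the node is assigned the static IP 192.168.76.2. The subnet and gateway actually granted can be read back afterwards with a plain inspect (a manual check, assuming the network still exists at that point):
	  docker network inspect missing-upgrade-955462 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'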
	I0108 23:10:29.542276 1262907 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0108 23:10:29.559399 1262907 cli_runner.go:164] Run: docker volume create missing-upgrade-955462 --label name.minikube.sigs.k8s.io=missing-upgrade-955462 --label created_by.minikube.sigs.k8s.io=true
	I0108 23:10:29.576722 1262907 oci.go:103] Successfully created a docker volume missing-upgrade-955462
	I0108 23:10:29.576853 1262907 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-955462-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-955462 --entrypoint /usr/bin/test -v missing-upgrade-955462:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0108 23:10:30.052652 1262907 oci.go:107] Successfully prepared a docker volume missing-upgrade-955462
	I0108 23:10:30.052688 1262907 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0108 23:10:30.052862 1262907 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0108 23:10:30.053029 1262907 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0108 23:10:30.149230 1262907 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-955462 --name missing-upgrade-955462 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-955462 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-955462 --network missing-upgrade-955462 --ip 192.168.76.2 --volume missing-upgrade-955462:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0108 23:10:30.529287 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Running}}
	I0108 23:10:30.554403 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	I0108 23:10:30.590511 1262907 cli_runner.go:164] Run: docker exec missing-upgrade-955462 stat /var/lib/dpkg/alternatives/iptables
	I0108 23:10:30.662647 1262907 oci.go:144] the created container "missing-upgrade-955462" has a running status.
	I0108 23:10:30.662679 1262907 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa...
	I0108 23:10:30.970958 1262907 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
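	The freshly generated key is copied into the container as /home/docker/.ssh/authorized_keys, and the node's SSH endpoint is published on 127.0.0.1 (port 34207 in the lines below). A manual connection would look roughly like the following, assuming the key path and host port from this particular run:
	  ssh -i /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa -p 34207 docker@127.0.0.1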
	I0108 23:10:30.997242 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	I0108 23:10:31.030472 1262907 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0108 23:10:31.030493 1262907 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-955462 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0108 23:10:31.113289 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	I0108 23:10:31.137991 1262907 machine.go:88] provisioning docker machine ...
	I0108 23:10:31.138021 1262907 ubuntu.go:169] provisioning hostname "missing-upgrade-955462"
	I0108 23:10:31.138087 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:31.162582 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:31.163038 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:31.163054 1262907 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-955462 && echo "missing-upgrade-955462" | sudo tee /etc/hostname
	I0108 23:10:31.328384 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-955462
	
	I0108 23:10:31.328539 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:31.349168 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:31.349581 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:31.349610 1262907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-955462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-955462/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-955462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:10:31.493866 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:10:31.493932 1262907 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 23:10:31.493970 1262907 ubuntu.go:177] setting up certificates
	I0108 23:10:31.494007 1262907 provision.go:83] configureAuth start
	I0108 23:10:31.494090 1262907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-955462
	I0108 23:10:31.515002 1262907 provision.go:138] copyHostCerts
	I0108 23:10:31.515061 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 23:10:31.515069 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 23:10:31.515144 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 23:10:31.515238 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 23:10:31.515243 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 23:10:31.517226 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 23:10:31.517340 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 23:10:31.517353 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 23:10:31.517392 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 23:10:31.517453 1262907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-955462 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-955462]
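	The server certificate is generated with SANs covering the node IP 192.168.76.2, 127.0.0.1/localhost and the minikube hostnames, so both the locally published ports and in-cluster access validate against it. The SANs baked into the resulting server.pem can be verified with openssl (assuming openssl is installed on the host):
	  openssl x509 -in /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'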
	I0108 23:10:31.826963 1262907 provision.go:172] copyRemoteCerts
	I0108 23:10:31.827101 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:10:31.827153 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:31.849932 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:31.954536 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 23:10:31.977439 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 23:10:31.999504 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:10:32.023567 1262907 provision.go:86] duration metric: configureAuth took 529.530059ms
	I0108 23:10:32.023596 1262907 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:10:32.023779 1262907 config.go:182] Loaded profile config "missing-upgrade-955462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:10:32.023886 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:32.042462 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:32.042893 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:32.042916 1262907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:10:32.435374 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:10:32.435444 1262907 machine.go:91] provisioned docker machine in 1.297433054s
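	The SSH command above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts cri-o, which completes machine provisioning. If that setting ever needs to be checked after the node is up, the file can be read back through minikube's ssh wrapper (a manual step, not performed by the test):
	  out/minikube-linux-arm64 -p missing-upgrade-955462 ssh "cat /etc/sysconfig/crio.minikube"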
	I0108 23:10:32.435469 1262907 client.go:171] LocalClient.Create took 3.026914771s
	I0108 23:10:32.435497 1262907 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-955462" took 3.02698531s
	I0108 23:10:32.435540 1262907 start.go:300] post-start starting for "missing-upgrade-955462" (driver="docker")
	I0108 23:10:32.435571 1262907 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:10:32.435665 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:10:32.435745 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:32.454500 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:32.558486 1262907 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:10:32.562416 1262907 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:10:32.562443 1262907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:10:32.562454 1262907 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:10:32.562461 1262907 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 23:10:32.562473 1262907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 23:10:32.562530 1262907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 23:10:32.562612 1262907 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 23:10:32.562720 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:10:32.571493 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 23:10:32.594260 1262907 start.go:303] post-start completed in 158.685183ms
	I0108 23:10:32.594632 1262907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-955462
	I0108 23:10:32.613180 1262907 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/missing-upgrade-955462/config.json ...
	I0108 23:10:32.613469 1262907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:10:32.613524 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:32.631561 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:32.728117 1262907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:10:32.733817 1262907 start.go:128] duration metric: createHost completed in 3.328020994s
	I0108 23:10:32.733946 1262907 cli_runner.go:164] Run: docker container inspect missing-upgrade-955462 --format={{.State.Status}}
	W0108 23:10:32.751843 1262907 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:10:32.751877 1262907 machine.go:88] provisioning docker machine ...
	I0108 23:10:32.751894 1262907 ubuntu.go:169] provisioning hostname "missing-upgrade-955462"
	I0108 23:10:32.751961 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:32.770880 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:32.771306 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:32.771321 1262907 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-955462 && echo "missing-upgrade-955462" | sudo tee /etc/hostname
	I0108 23:10:32.920710 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-955462
	
	I0108 23:10:32.920788 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:32.940296 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:32.940705 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:32.940728 1262907 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-955462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-955462/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-955462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:10:33.082335 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:10:33.082364 1262907 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 23:10:33.082386 1262907 ubuntu.go:177] setting up certificates
	I0108 23:10:33.082395 1262907 provision.go:83] configureAuth start
	I0108 23:10:33.082458 1262907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-955462
	I0108 23:10:33.108539 1262907 provision.go:138] copyHostCerts
	I0108 23:10:33.108607 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 23:10:33.108618 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 23:10:33.108692 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 23:10:33.108777 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 23:10:33.108783 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 23:10:33.108807 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 23:10:33.108855 1262907 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 23:10:33.108859 1262907 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 23:10:33.108880 1262907 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 23:10:33.108963 1262907 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-955462 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-955462]
	I0108 23:10:33.226622 1262907 provision.go:172] copyRemoteCerts
	I0108 23:10:33.226697 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:10:33.226742 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:33.244769 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:33.342056 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:10:33.363882 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 23:10:33.385967 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:10:33.408793 1262907 provision.go:86] duration metric: configureAuth took 326.38408ms
	I0108 23:10:33.408821 1262907 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:10:33.409089 1262907 config.go:182] Loaded profile config "missing-upgrade-955462": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:10:33.409203 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:33.427280 1262907 main.go:141] libmachine: Using SSH client type: native
	I0108 23:10:33.427688 1262907 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I0108 23:10:33.427709 1262907 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:10:33.758212 1262907 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:10:33.758236 1262907 machine.go:91] provisioned docker machine in 1.006351312s
	I0108 23:10:33.758247 1262907 start.go:300] post-start starting for "missing-upgrade-955462" (driver="docker")
	I0108 23:10:33.758259 1262907 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:10:33.758325 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:10:33.758371 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:33.779764 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:33.878397 1262907 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:10:33.882399 1262907 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:10:33.882423 1262907 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:10:33.882434 1262907 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:10:33.882441 1262907 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 23:10:33.882452 1262907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 23:10:33.882509 1262907 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 23:10:33.882586 1262907 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 23:10:33.882696 1262907 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:10:33.891547 1262907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 23:10:33.915074 1262907 start.go:303] post-start completed in 156.811169ms
	I0108 23:10:33.915155 1262907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:10:33.915207 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:33.935843 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:34.031097 1262907 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:10:34.036696 1262907 fix.go:56] fixHost completed within 26.661948715s
	I0108 23:10:34.036720 1262907 start.go:83] releasing machines lock for "missing-upgrade-955462", held for 26.661997847s
	I0108 23:10:34.036793 1262907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-955462
	I0108 23:10:34.055113 1262907 ssh_runner.go:195] Run: cat /version.json
	I0108 23:10:34.055166 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:34.055230 1262907 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:10:34.055297 1262907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-955462
	I0108 23:10:34.089410 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	I0108 23:10:34.092808 1262907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/missing-upgrade-955462/id_rsa Username:docker}
	W0108 23:10:34.185474 1262907 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:10:34.185559 1262907 ssh_runner.go:195] Run: systemctl --version
	I0108 23:10:34.319101 1262907 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:10:34.426426 1262907 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:10:34.431876 1262907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:10:34.453591 1262907 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:10:34.453724 1262907 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:10:34.482896 1262907 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:10:34.482935 1262907 start.go:475] detecting cgroup driver to use...
	I0108 23:10:34.482987 1262907 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:10:34.483056 1262907 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:10:34.509534 1262907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:10:34.520958 1262907 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:10:34.521147 1262907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:10:34.532969 1262907 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:10:34.544466 1262907 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:10:34.556554 1262907 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:10:34.556666 1262907 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:10:34.667380 1262907 docker.go:219] disabling docker service ...
	I0108 23:10:34.667497 1262907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:10:34.681126 1262907 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:10:34.693308 1262907 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:10:34.788139 1262907 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:10:34.912910 1262907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:10:34.926214 1262907 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:10:34.943177 1262907 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:10:34.943247 1262907 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:10:34.956652 1262907 out.go:177] 
	W0108 23:10:34.958789 1262907 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:10:34.958823 1262907 out.go:239] * 
	W0108 23:10:34.959887 1262907 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:10:34.961832 1262907 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-955462 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2024-01-08 23:10:35.007417367 +0000 UTC m=+2435.697150689
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-955462
helpers_test.go:235: (dbg) docker inspect missing-upgrade-955462:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef",
	        "Created": "2024-01-08T23:10:30.167191646Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1265119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-08T23:10:30.519914284Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef/hosts",
	        "LogPath": "/var/lib/docker/containers/655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef/655de12a17ddd5b133db37f8dff1e02a34ba662e97dfd19b10068b7eaa37a5ef-json.log",
	        "Name": "/missing-upgrade-955462",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-955462:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-955462",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f66d0087d91b21492026f5eeaab042766e5e788be0b4837786a5ea80973254a3-init/diff:/var/lib/docker/overlay2/5c565fc02f9fc40c36ada1ad701c8076425372509806915a92cc340352b69598/diff:/var/lib/docker/overlay2/62923341211a0ffb823a22f5cb316e2b41ce33c8ee3c679711b8b4ef50b443cd/diff:/var/lib/docker/overlay2/7d7def56b31570d417881d96f38efb052116ac136b42e66e78b86a24a9de91d0/diff:/var/lib/docker/overlay2/cf6ae910835fa35636f6fdaa77d4b526b149c380c6ae257a743d5633375f5a2f/diff:/var/lib/docker/overlay2/0a8cad2c2e796e83db446abd8c6c8bbdb2d6d058cd92ca6f7ea41fe66f87e9b5/diff:/var/lib/docker/overlay2/a9f8a82632ea47985a3d2dd8ea68dbe3fbedc8e98e622d905cf29cdb2b5b581b/diff:/var/lib/docker/overlay2/b0822bf91412bf12ab531013c46d86a93a37d94c43bb082471afd4408a77d7da/diff:/var/lib/docker/overlay2/08f55230bc3b633a870f74efea12b5a9eedf949f196498bf9ddc3d7640906804/diff:/var/lib/docker/overlay2/29a66ec5a107552529aad375f0551cd4739bf03f58e06d03fb01aa24193450ec/diff:/var/lib/docker/overlay2/94828e
4f087a4f7baba148a2ab51a2ba8232ce5b35c423c80d089fbf25e650fc/diff:/var/lib/docker/overlay2/887c25e061b0dcc8dc91d03e2a59637cda0dc141ba5341d1a35ddb862645f125/diff:/var/lib/docker/overlay2/c12a29144b35c4a2a49cf24f892d23b96b261a741f040039dd40f2a4b1eea9f1/diff:/var/lib/docker/overlay2/03194a55188dbd1dbec7507af6a8ad1c6255caddde04668d4973ef631e153013/diff:/var/lib/docker/overlay2/c24fed96036fd6ad1c9ef2006b8c1fb0264d11cbd2534168fb997e62b54862fd/diff:/var/lib/docker/overlay2/81403548d9729e58f47eafefa1a76e756fb8fe4fef6a7079243a1cb629d6aaf1/diff:/var/lib/docker/overlay2/91f5831ecde7e6f647770b87a847111fe6316d69fc25b8a8d997ce59bcb03ea5/diff:/var/lib/docker/overlay2/522d7e88c7a20650576d371267682f6ba43692ed7479f1062ac02edbd85255d1/diff:/var/lib/docker/overlay2/a76f486fa06861aeaad977dbc40216f264c5f720f15b954853e5670b618b206b/diff:/var/lib/docker/overlay2/d0de17c81bdfc126f0c188f293f419ac2a7ab6dce74d2a82a7a7d9a1b3a4f009/diff:/var/lib/docker/overlay2/0fd5f428434d838bc141c351f6600799db846c4c33e0d388f7e35332f21a356b/diff:/var/lib/d
ocker/overlay2/a73bebd19e3f4adbd3811f63922df842f5f1210c86e172c8caaa9af2f2a32a63/diff:/var/lib/docker/overlay2/e5d5f1ac171cd174e449d658924e67675cc000fc8c927436210d66eea31e910d/diff:/var/lib/docker/overlay2/7c5c94c45c769c813ea1d29ac53252b23ea684ace7af95a7f23732337674827d/diff:/var/lib/docker/overlay2/da60a72395c30fcf1ecd1f5fc3ca877af195809fca5f3ca2ff4a17ffeb3f6e32/diff:/var/lib/docker/overlay2/30b902eb5134212cda769790531bfca99dc66098321ab87667fed0963e0fe3b6/diff:/var/lib/docker/overlay2/6692de39eab503d562e2badecdc60ef07bdb69765cb1612d56f1442bd68b3f7f/diff:/var/lib/docker/overlay2/4dcdb499992192b4ff3f25b758a73f5ae6f372c9e160be5a7a34dfd363f6f5e5/diff:/var/lib/docker/overlay2/968aae618b3206477847b36861043c4bb7aa0503e2299a3a4cc2335300c5fb53/diff:/var/lib/docker/overlay2/de4f3466257af864f22fd8adbef1f55e614dda8b91604558a2159f6ee80ec664/diff:/var/lib/docker/overlay2/94c9c9c34023e111d5ffb5ef1f55df4cc66ece4da10d4205e7751228e70dd1da/diff:/var/lib/docker/overlay2/214b19dd7d0b160cbc273399970ca799d4da737a5212973bdf91d146279
d19da/diff:/var/lib/docker/overlay2/3329a4c97678a271c036380de92dc5ab6721d7e50bef2ee92d5af366b3701f11/diff:/var/lib/docker/overlay2/bb223665fff33a0360fe4f01de1f73111cc086af47d4587238149ebbc7a641f3/diff:/var/lib/docker/overlay2/7b7a9366d1fb25096ca960b2e98505c460718258a9f962e552a4ddf594900d57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f66d0087d91b21492026f5eeaab042766e5e788be0b4837786a5ea80973254a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f66d0087d91b21492026f5eeaab042766e5e788be0b4837786a5ea80973254a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f66d0087d91b21492026f5eeaab042766e5e788be0b4837786a5ea80973254a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-955462",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-955462/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-955462",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-955462",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-955462",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d5f1009e799bf68ec1dfd827841341682babae45b26ad5cf8bf4c29c1359c5b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8d5f1009e799",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-955462": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "655de12a17dd",
	                        "missing-upgrade-955462"
	                    ],
	                    "NetworkID": "1a8d741cfab5c736f60447cbdff7cc4e2c4c7b1c3ceaaecf267f3b64329232a3",
	                    "EndpointID": "6463dd90d75a22a278e632103ee8c31902b64a23098cfeff7928447919c10cff",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-955462 -n missing-upgrade-955462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-955462 -n missing-upgrade-955462: exit status 6 (362.615009ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:10:35.373248 1266121 status.go:415] kubeconfig endpoint: got: 192.168.59.71:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-955462" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
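helpers_test.go note: the stale-endpoint error above ("kubeconfig endpoint: got: 192.168.59.71:8443, want: 192.168.76.2:8443") is the condition that the suggested `minikube update-context` is meant to repair. A minimal invocation against this profile, shown only as an illustration and not something the test harness ran, would be:

	# Illustrative only: rewrite the kubeconfig entry for this profile to point at the current endpoint.
	minikube -p missing-upgrade-955462 update-context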
helpers_test.go:175: Cleaning up "missing-upgrade-955462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-955462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-955462: (1.874976518s)
--- FAIL: TestMissingContainerUpgrade (174.38s)
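The underlying cause recorded in the stderr above is the pause_image update: `sed -i` exits with status 2 because /etc/crio/crio.conf.d/02-crio.conf does not exist inside the kicbase:v0.0.17 container that the v1.17.0 binary created. Below is a minimal shell sketch of a guarded variant of that same edit; the drop-in path and pause image tag are taken from the log, while creating the file when it is absent is an assumption made for illustration, not a verified fix for this upgrade path:

	# Illustrative sketch only (not minikube's actual code path).
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo mkdir -p "$(dirname "$CONF")"
	# If the drop-in is missing (as it is in this image), seed it with the desired value;
	# otherwise rewrite whatever pause_image line is already there.
	[ -f "$CONF" ] || printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"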

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (82.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.483638580.exe start -p stopped-upgrade-152791 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0108 23:11:43.996279 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.483638580.exe start -p stopped-upgrade-152791 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m12.482091258s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.483638580.exe -p stopped-upgrade-152791 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.483638580.exe -p stopped-upgrade-152791 stop: (2.29843653s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-152791 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 23:11:55.943432 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-152791 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.356814764s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-152791] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-152791 in cluster stopped-upgrade-152791
	* Pulling base image v0.0.42-1703790982-17866 ...
	* Restarting existing docker container for "stopped-upgrade-152791" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:11:53.572799 1270100 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:11:53.573024 1270100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:11:53.573037 1270100 out.go:309] Setting ErrFile to fd 2...
	I0108 23:11:53.573044 1270100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:11:53.573291 1270100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 23:11:53.573676 1270100 out.go:303] Setting JSON to false
	I0108 23:11:53.574588 1270100 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21254,"bootTime":1704734260,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 23:11:53.574660 1270100 start.go:138] virtualization:  
	I0108 23:11:53.578621 1270100 out.go:177] * [stopped-upgrade-152791] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 23:11:53.581391 1270100 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0108 23:11:53.585503 1270100 notify.go:220] Checking for updates...
	I0108 23:11:53.591183 1270100 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 23:11:53.593288 1270100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:11:53.595808 1270100 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 23:11:53.597626 1270100 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 23:11:53.599591 1270100 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 23:11:53.601566 1270100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:11:53.605706 1270100 config.go:182] Loaded profile config "stopped-upgrade-152791": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:11:53.608257 1270100 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 23:11:53.610077 1270100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:11:53.656252 1270100 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:11:53.656395 1270100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:11:53.823952 1270100 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 23:11:53.808843599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:11:53.824066 1270100 docker.go:295] overlay module found
	I0108 23:11:53.826454 1270100 out.go:177] * Using the docker driver based on existing profile
	I0108 23:11:53.829509 1270100 start.go:298] selected driver: docker
	I0108 23:11:53.829526 1270100 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-152791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-152791 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.216 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:11:53.829618 1270100 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:11:53.826626 1270100 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0108 23:11:53.830253 1270100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:11:53.947464 1270100 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 23:11:53.936947341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:11:53.947759 1270100 cni.go:84] Creating CNI manager for ""
	I0108 23:11:53.947790 1270100 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 23:11:53.947808 1270100 start_flags.go:321] config:
	{Name:stopped-upgrade-152791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-152791 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.216 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 23:11:53.950869 1270100 out.go:177] * Starting control plane node stopped-upgrade-152791 in cluster stopped-upgrade-152791
	I0108 23:11:53.952721 1270100 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 23:11:53.954836 1270100 out.go:177] * Pulling base image v0.0.42-1703790982-17866 ...
	I0108 23:11:53.956504 1270100 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0108 23:11:53.956678 1270100 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0108 23:11:53.975813 1270100 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0108 23:11:53.975837 1270100 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0108 23:11:54.023113 1270100 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0108 23:11:54.023277 1270100 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/stopped-upgrade-152791/config.json ...
	I0108 23:11:54.023582 1270100 cache.go:194] Successfully downloaded all kic artifacts
	I0108 23:11:54.023642 1270100 start.go:365] acquiring machines lock for stopped-upgrade-152791: {Name:mk2d3f21bda003f85f99283f3eecb1afc742d509 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.023709 1270100 start.go:369] acquired machines lock for "stopped-upgrade-152791" in 36.578µs
	I0108 23:11:54.023731 1270100 start.go:96] Skipping create...Using existing machine configuration
	I0108 23:11:54.023739 1270100 fix.go:54] fixHost starting: 
	I0108 23:11:54.024088 1270100 cli_runner.go:164] Run: docker container inspect stopped-upgrade-152791 --format={{.State.Status}}
	I0108 23:11:54.024380 1270100 cache.go:107] acquiring lock: {Name:mk8191f0751ad02e7e922e2c4bc53476595b89dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024450 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 23:11:54.024460 1270100 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.16µs
	I0108 23:11:54.024469 1270100 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 23:11:54.024478 1270100 cache.go:107] acquiring lock: {Name:mka155ef0c2f150d02867e6959ed64f328602b3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024508 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0108 23:11:54.024514 1270100 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.874µs
	I0108 23:11:54.024521 1270100 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0108 23:11:54.024530 1270100 cache.go:107] acquiring lock: {Name:mk00e10018b13da342cf356705aa85e29e811147 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024556 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0108 23:11:54.024561 1270100 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.991µs
	I0108 23:11:54.024568 1270100 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0108 23:11:54.024578 1270100 cache.go:107] acquiring lock: {Name:mk04fd55195cd30eea8ba943bf4f2b6ca5847e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024603 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0108 23:11:54.024609 1270100 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 31.179µs
	I0108 23:11:54.024616 1270100 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0108 23:11:54.024625 1270100 cache.go:107] acquiring lock: {Name:mkbbd3fad7f43423e845544e96fe50e3a1721f24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024662 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0108 23:11:54.024666 1270100 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 42.847µs
	I0108 23:11:54.024673 1270100 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0108 23:11:54.024682 1270100 cache.go:107] acquiring lock: {Name:mkb97803584c6f0bc4bac659ac9219ac35c71b21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024710 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0108 23:11:54.024715 1270100 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 35.135µs
	I0108 23:11:54.024722 1270100 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0108 23:11:54.024730 1270100 cache.go:107] acquiring lock: {Name:mkefdf5dfdfab5ebeea425b5a8a6f5b068bd39b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024757 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0108 23:11:54.024762 1270100 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 33.354µs
	I0108 23:11:54.024768 1270100 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0108 23:11:54.024776 1270100 cache.go:107] acquiring lock: {Name:mk19e26006614dbf428ed8e1de132395c9848246 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 23:11:54.024799 1270100 cache.go:115] /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0108 23:11:54.024803 1270100 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.365µs
	I0108 23:11:54.024809 1270100 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0108 23:11:54.024815 1270100 cache.go:87] Successfully saved all images to host disk.
	I0108 23:11:54.064287 1270100 fix.go:102] recreateIfNeeded on stopped-upgrade-152791: state=Stopped err=<nil>
	W0108 23:11:54.064346 1270100 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 23:11:54.067183 1270100 out.go:177] * Restarting existing docker container for "stopped-upgrade-152791" ...
	I0108 23:11:54.069758 1270100 cli_runner.go:164] Run: docker start stopped-upgrade-152791
	I0108 23:11:54.451622 1270100 cli_runner.go:164] Run: docker container inspect stopped-upgrade-152791 --format={{.State.Status}}
	I0108 23:11:54.479047 1270100 kic.go:430] container "stopped-upgrade-152791" state is running.
	I0108 23:11:54.479427 1270100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-152791
	I0108 23:11:54.502478 1270100 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/stopped-upgrade-152791/config.json ...
	I0108 23:11:54.502707 1270100 machine.go:88] provisioning docker machine ...
	I0108 23:11:54.502727 1270100 ubuntu.go:169] provisioning hostname "stopped-upgrade-152791"
	I0108 23:11:54.502778 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:54.526887 1270100 main.go:141] libmachine: Using SSH client type: native
	I0108 23:11:54.527315 1270100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I0108 23:11:54.527328 1270100 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-152791 && echo "stopped-upgrade-152791" | sudo tee /etc/hostname
	I0108 23:11:54.527985 1270100 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51272->127.0.0.1:34215: read: connection reset by peer
	I0108 23:11:57.689066 1270100 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-152791
	
	I0108 23:11:57.689205 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:57.708741 1270100 main.go:141] libmachine: Using SSH client type: native
	I0108 23:11:57.709412 1270100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I0108 23:11:57.709447 1270100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-152791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-152791/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-152791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 23:11:57.850277 1270100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 23:11:57.850360 1270100 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17866-1146913/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-1146913/.minikube}
	I0108 23:11:57.850397 1270100 ubuntu.go:177] setting up certificates
	I0108 23:11:57.850432 1270100 provision.go:83] configureAuth start
	I0108 23:11:57.850518 1270100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-152791
	I0108 23:11:57.869869 1270100 provision.go:138] copyHostCerts
	I0108 23:11:57.870036 1270100 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem, removing ...
	I0108 23:11:57.870046 1270100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem
	I0108 23:11:57.870123 1270100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.pem (1078 bytes)
	I0108 23:11:57.870251 1270100 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem, removing ...
	I0108 23:11:57.870258 1270100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem
	I0108 23:11:57.870285 1270100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/cert.pem (1123 bytes)
	I0108 23:11:57.870359 1270100 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem, removing ...
	I0108 23:11:57.870364 1270100 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem
	I0108 23:11:57.870388 1270100 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-1146913/.minikube/key.pem (1675 bytes)
	I0108 23:11:57.870441 1270100 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-152791 san=[192.168.59.216 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-152791]
	I0108 23:11:58.522836 1270100 provision.go:172] copyRemoteCerts
	I0108 23:11:58.522917 1270100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 23:11:58.522960 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:58.541634 1270100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/stopped-upgrade-152791/id_rsa Username:docker}
	I0108 23:11:58.642252 1270100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 23:11:58.666329 1270100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 23:11:58.689571 1270100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 23:11:58.714156 1270100 provision.go:86] duration metric: configureAuth took 863.690652ms
	I0108 23:11:58.714227 1270100 ubuntu.go:193] setting minikube options for container-runtime
	I0108 23:11:58.714445 1270100 config.go:182] Loaded profile config "stopped-upgrade-152791": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0108 23:11:58.714581 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:58.732754 1270100 main.go:141] libmachine: Using SSH client type: native
	I0108 23:11:58.733211 1270100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfad0] 0x3c2240 <nil>  [] 0s} 127.0.0.1 34215 <nil> <nil>}
	I0108 23:11:58.733234 1270100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 23:11:59.177630 1270100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 23:11:59.177653 1270100 machine.go:91] provisioned docker machine in 4.674936754s
	I0108 23:11:59.177665 1270100 start.go:300] post-start starting for "stopped-upgrade-152791" (driver="docker")
	I0108 23:11:59.177679 1270100 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 23:11:59.177748 1270100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 23:11:59.177802 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:59.197733 1270100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/stopped-upgrade-152791/id_rsa Username:docker}
	I0108 23:11:59.298156 1270100 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 23:11:59.302366 1270100 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0108 23:11:59.302392 1270100 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0108 23:11:59.302403 1270100 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0108 23:11:59.302411 1270100 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0108 23:11:59.302421 1270100 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/addons for local assets ...
	I0108 23:11:59.302484 1270100 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-1146913/.minikube/files for local assets ...
	I0108 23:11:59.302588 1270100 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem -> 11522512.pem in /etc/ssl/certs
	I0108 23:11:59.302705 1270100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 23:11:59.311872 1270100 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/ssl/certs/11522512.pem --> /etc/ssl/certs/11522512.pem (1708 bytes)
	I0108 23:11:59.334833 1270100 start.go:303] post-start completed in 157.153084ms
	I0108 23:11:59.334914 1270100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:11:59.334953 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:59.353576 1270100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/stopped-upgrade-152791/id_rsa Username:docker}
	I0108 23:11:59.455164 1270100 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0108 23:11:59.460743 1270100 fix.go:56] fixHost completed within 5.436996433s
	I0108 23:11:59.460768 1270100 start.go:83] releasing machines lock for "stopped-upgrade-152791", held for 5.437050283s
	I0108 23:11:59.460838 1270100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-152791
	I0108 23:11:59.478485 1270100 ssh_runner.go:195] Run: cat /version.json
	I0108 23:11:59.478535 1270100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 23:11:59.478590 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:59.478539 1270100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-152791
	I0108 23:11:59.498293 1270100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/stopped-upgrade-152791/id_rsa Username:docker}
	I0108 23:11:59.503707 1270100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34215 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/stopped-upgrade-152791/id_rsa Username:docker}
	W0108 23:11:59.664262 1270100 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 23:11:59.664404 1270100 ssh_runner.go:195] Run: systemctl --version
	I0108 23:11:59.669521 1270100 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 23:11:59.843103 1270100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 23:11:59.848572 1270100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:11:59.871711 1270100 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0108 23:11:59.871850 1270100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 23:11:59.900164 1270100 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 23:11:59.900188 1270100 start.go:475] detecting cgroup driver to use...
	I0108 23:11:59.900246 1270100 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0108 23:11:59.900316 1270100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 23:11:59.928552 1270100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 23:11:59.941863 1270100 docker.go:203] disabling cri-docker service (if available) ...
	I0108 23:11:59.941955 1270100 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 23:11:59.954282 1270100 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 23:11:59.966409 1270100 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 23:11:59.979288 1270100 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 23:11:59.979409 1270100 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 23:12:00.371408 1270100 docker.go:219] disabling docker service ...
	I0108 23:12:00.371483 1270100 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 23:12:00.418957 1270100 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 23:12:00.436197 1270100 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 23:12:00.596221 1270100 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 23:12:00.754087 1270100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 23:12:00.770132 1270100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 23:12:00.794864 1270100 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 23:12:00.794958 1270100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 23:12:00.810306 1270100 out.go:177] 
	W0108 23:12:00.812956 1270100 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 23:12:00.812987 1270100 out.go:239] * 
	* 
	W0108 23:12:00.814220 1270100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 23:12:00.816259 1270100 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-152791 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (82.14s)
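
Editor's note: the stderr above shows the actual failure point — the pause_image update runs a sed edit against /etc/crio/crio.conf.d/02-crio.conf, and that drop-in does not exist inside the restarted v1.17.0-era container, so minikube exits with RUNTIME_ENABLE. A minimal shell sketch of the same edit with a guard for the missing file (the path and sed expression are copied from the log; the create-if-absent fallback and the [crio.image] section name are assumptions of this sketch, not what minikube actually does):

	# Sketch only: apply the pause_image rewrite defensively inside the node.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
	  # Assumption: a minimal drop-in carrying only pause_image is accepted by this cri-o build.
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	fi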

                                                
                                    

Test pass (278/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 16.45
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 18.34
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 16.31
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.1
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
26 TestBinaryMirror 0.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 168.1
34 TestAddons/parallel/Registry 16.61
36 TestAddons/parallel/InspektorGadget 11.83
37 TestAddons/parallel/MetricsServer 6.11
40 TestAddons/parallel/CSI 52.76
41 TestAddons/parallel/Headlamp 14.93
42 TestAddons/parallel/CloudSpanner 7.05
43 TestAddons/parallel/LocalPath 54.04
44 TestAddons/parallel/NvidiaDevicePlugin 6.59
45 TestAddons/parallel/Yakd 5.01
48 TestAddons/serial/GCPAuth/Namespaces 0.19
49 TestAddons/StoppedEnableDisable 12.35
50 TestCertOptions 37.73
51 TestCertExpiration 246.53
53 TestForceSystemdFlag 42.74
54 TestForceSystemdEnv 46.74
60 TestErrorSpam/setup 32.4
61 TestErrorSpam/start 0.91
62 TestErrorSpam/status 1.17
63 TestErrorSpam/pause 1.91
64 TestErrorSpam/unpause 2.03
65 TestErrorSpam/stop 1.54
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 74.5
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 32.39
72 TestFunctional/serial/KubeContext 0.07
73 TestFunctional/serial/KubectlGetPods 0.1
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.83
77 TestFunctional/serial/CacheCmd/cache/add_local 1.17
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
79 TestFunctional/serial/CacheCmd/cache/list 0.08
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.23
82 TestFunctional/serial/CacheCmd/cache/delete 0.18
83 TestFunctional/serial/MinikubeKubectlCmd 0.17
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
85 TestFunctional/serial/ExtraConfig 36.16
86 TestFunctional/serial/ComponentHealth 0.12
87 TestFunctional/serial/LogsCmd 1.85
88 TestFunctional/serial/LogsFileCmd 1.89
89 TestFunctional/serial/InvalidService 4.3
91 TestFunctional/parallel/ConfigCmd 0.66
92 TestFunctional/parallel/DashboardCmd 9.51
93 TestFunctional/parallel/DryRun 0.51
94 TestFunctional/parallel/InternationalLanguage 0.24
95 TestFunctional/parallel/StatusCmd 1.35
99 TestFunctional/parallel/ServiceCmdConnect 10.65
100 TestFunctional/parallel/AddonsCmd 0.23
101 TestFunctional/parallel/PersistentVolumeClaim 25.58
103 TestFunctional/parallel/SSHCmd 0.78
104 TestFunctional/parallel/CpCmd 2.86
106 TestFunctional/parallel/FileSync 0.65
107 TestFunctional/parallel/CertSync 2.71
111 TestFunctional/parallel/NodeLabels 0.29
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.92
115 TestFunctional/parallel/License 0.34
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.57
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
129 TestFunctional/parallel/ProfileCmd/profile_list 0.48
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
131 TestFunctional/parallel/MountCmd/any-port 8.7
132 TestFunctional/parallel/ServiceCmd/List 0.7
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
135 TestFunctional/parallel/ServiceCmd/Format 0.43
136 TestFunctional/parallel/ServiceCmd/URL 0.44
137 TestFunctional/parallel/MountCmd/specific-port 2.36
138 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
139 TestFunctional/parallel/Version/short 0.09
140 TestFunctional/parallel/Version/components 1.33
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
145 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
146 TestFunctional/parallel/ImageCommands/Setup 2.39
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.28
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
152 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.59
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.94
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
157 TestFunctional/delete_addon-resizer_images 0.08
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 94.36
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.94
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.7
170 TestJSONOutput/start/Command 78.57
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.84
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.75
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.92
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.29
195 TestKicCustomNetwork/create_custom_network 54.54
196 TestKicCustomNetwork/use_default_bridge_network 35.14
197 TestKicExistingNetwork 39.54
198 TestKicCustomSubnet 36.2
199 TestKicStaticIP 38.11
200 TestMainNoArgs 0.07
201 TestMinikubeProfile 68.33
204 TestMountStart/serial/StartWithMountFirst 8.08
205 TestMountStart/serial/VerifyMountFirst 0.3
206 TestMountStart/serial/StartWithMountSecond 8.52
207 TestMountStart/serial/VerifyMountSecond 0.29
208 TestMountStart/serial/DeleteFirst 1.68
209 TestMountStart/serial/VerifyMountPostDelete 0.29
210 TestMountStart/serial/Stop 1.24
211 TestMountStart/serial/RestartStopped 7.99
212 TestMountStart/serial/VerifyMountPostStop 0.29
215 TestMultiNode/serial/FreshStart2Nodes 130.34
216 TestMultiNode/serial/DeployApp2Nodes 5.11
218 TestMultiNode/serial/AddNode 48.1
219 TestMultiNode/serial/MultiNodeLabels 0.11
220 TestMultiNode/serial/ProfileList 0.36
221 TestMultiNode/serial/CopyFile 11.52
222 TestMultiNode/serial/StopNode 2.38
223 TestMultiNode/serial/StartAfterStop 13.58
224 TestMultiNode/serial/RestartKeepsNodes 124.5
225 TestMultiNode/serial/DeleteNode 5.24
226 TestMultiNode/serial/StopMultiNode 24.05
227 TestMultiNode/serial/RestartMultiNode 80.73
228 TestMultiNode/serial/ValidateNameConflict 35.28
233 TestPreload 178.44
235 TestScheduledStopUnix 111.31
238 TestInsufficientStorage 13.75
241 TestKubernetesUpgrade 421.05
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
245 TestNoKubernetes/serial/StartWithK8s 43.31
246 TestNoKubernetes/serial/StartWithStopK8s 9.61
247 TestNoKubernetes/serial/Start 10.98
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
249 TestNoKubernetes/serial/ProfileList 1.12
250 TestNoKubernetes/serial/Stop 1.32
251 TestNoKubernetes/serial/StartNoArgs 7.89
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.53
253 TestStoppedBinaryUpgrade/Setup 1.42
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
264 TestPause/serial/Start 52.69
265 TestPause/serial/SecondStartNoReconfiguration 30.25
266 TestPause/serial/Pause 1.27
267 TestPause/serial/VerifyStatus 0.56
268 TestPause/serial/Unpause 1.35
269 TestPause/serial/PauseAgain 1.56
270 TestPause/serial/DeletePaused 3.09
271 TestPause/serial/VerifyDeletedResources 0.74
279 TestNetworkPlugins/group/false 4.52
284 TestStartStop/group/old-k8s-version/serial/FirstStart 128.37
285 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
287 TestStartStop/group/old-k8s-version/serial/Stop 12.24
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
289 TestStartStop/group/old-k8s-version/serial/SecondStart 441.71
291 TestStartStop/group/no-preload/serial/FirstStart 70.41
292 TestStartStop/group/no-preload/serial/DeployApp 8.4
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
294 TestStartStop/group/no-preload/serial/Stop 12.02
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
296 TestStartStop/group/no-preload/serial/SecondStart 366.72
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
300 TestStartStop/group/old-k8s-version/serial/Pause 3.75
302 TestStartStop/group/embed-certs/serial/FirstStart 82.25
303 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
304 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.18
305 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
306 TestStartStop/group/no-preload/serial/Pause 4.36
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.39
309 TestStartStop/group/embed-certs/serial/DeployApp 9.41
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
311 TestStartStop/group/embed-certs/serial/Stop 12.02
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
313 TestStartStop/group/embed-certs/serial/SecondStart 632.78
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.68
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 600.86
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
324 TestStartStop/group/embed-certs/serial/Pause 3.54
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.73
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.79
328 TestStartStop/group/newest-cni/serial/FirstStart 53.6
329 TestNetworkPlugins/group/auto/Start 84.52
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
332 TestStartStop/group/newest-cni/serial/Stop 1.4
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
334 TestStartStop/group/newest-cni/serial/SecondStart 32.49
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
338 TestStartStop/group/newest-cni/serial/Pause 3.62
339 TestNetworkPlugins/group/auto/KubeletFlags 0.41
340 TestNetworkPlugins/group/auto/NetCatPod 13.45
341 TestNetworkPlugins/group/kindnet/Start 82.11
342 TestNetworkPlugins/group/auto/DNS 0.19
343 TestNetworkPlugins/group/auto/Localhost 0.16
344 TestNetworkPlugins/group/auto/HairPin 0.16
345 TestNetworkPlugins/group/calico/Start 78
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
348 TestNetworkPlugins/group/kindnet/NetCatPod 13.29
349 TestNetworkPlugins/group/kindnet/DNS 0.23
350 TestNetworkPlugins/group/kindnet/Localhost 0.18
351 TestNetworkPlugins/group/kindnet/HairPin 0.18
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.46
354 TestNetworkPlugins/group/calico/NetCatPod 12.41
355 TestNetworkPlugins/group/custom-flannel/Start 77.11
356 TestNetworkPlugins/group/calico/DNS 0.27
357 TestNetworkPlugins/group/calico/Localhost 0.24
358 TestNetworkPlugins/group/calico/HairPin 0.28
359 TestNetworkPlugins/group/enable-default-cni/Start 92.41
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
362 TestNetworkPlugins/group/custom-flannel/DNS 0.21
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
365 TestNetworkPlugins/group/flannel/Start 65.63
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
371 TestNetworkPlugins/group/bridge/Start 87.63
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
374 TestNetworkPlugins/group/flannel/NetCatPod 12.32
375 TestNetworkPlugins/group/flannel/DNS 0.2
376 TestNetworkPlugins/group/flannel/Localhost 0.24
377 TestNetworkPlugins/group/flannel/HairPin 0.21
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.53
379 TestNetworkPlugins/group/bridge/NetCatPod 10.36
380 TestNetworkPlugins/group/bridge/DNS 0.19
381 TestNetworkPlugins/group/bridge/Localhost 0.16
382 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (16.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.450805543s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.45s)
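
Editor's note: the -o=json form exercised here prints one JSON object per line to stdout instead of human-readable progress. A minimal sketch of consuming that stream (the .type field follows the CloudEvents-style records minikube emits; treat the field name as an assumption rather than a documented contract):

	# Sketch only: rerun the same download-only start and tally the emitted event types.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 \
	    --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r '.type? // empty' | sort | uniq -c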

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
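
Editor's note: this subtest only verifies that the tarball fetched by the json-events run is present on disk. A minimal sketch of the equivalent manual check, using the CI cache path and filename that appear in the download log further below (substitute your own MINIKUBE_HOME outside this environment):

	# Sketch only: confirm the v1.16.0 cri-o preload tarball is in the minikube cache.
	CACHE=/home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball
	TARBALL=preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	[ -s "$CACHE/$TARBALL" ] && echo "preload exists" || { echo "preload missing" >&2; exit 1; }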

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-248557
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-248557: exit status 85 (88.744608ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:29 UTC |          |
	|         | -p download-only-248557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:29:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:29:59.425703 1152257 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:29:59.425959 1152257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:29:59.425987 1152257 out.go:309] Setting ErrFile to fd 2...
	I0108 22:29:59.426015 1152257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:29:59.426327 1152257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	W0108 22:29:59.426486 1152257 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: no such file or directory
	I0108 22:29:59.427026 1152257 out.go:303] Setting JSON to true
	I0108 22:29:59.428041 1152257 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18740,"bootTime":1704734260,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:29:59.428162 1152257 start.go:138] virtualization:  
	I0108 22:29:59.431175 1152257 out.go:97] [download-only-248557] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:29:59.433300 1152257 out.go:169] MINIKUBE_LOCATION=17866
	W0108 22:29:59.431432 1152257 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 22:29:59.431493 1152257 notify.go:220] Checking for updates...
	I0108 22:29:59.436661 1152257 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:29:59.438462 1152257 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:29:59.440245 1152257 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:29:59.442143 1152257 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 22:29:59.445819 1152257 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:29:59.446085 1152257 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:29:59.469614 1152257 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:29:59.469726 1152257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:29:59.550419 1152257 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 22:29:59.540901407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:29:59.550524 1152257 docker.go:295] overlay module found
	I0108 22:29:59.552554 1152257 out.go:97] Using the docker driver based on user configuration
	I0108 22:29:59.552584 1152257 start.go:298] selected driver: docker
	I0108 22:29:59.552591 1152257 start.go:902] validating driver "docker" against <nil>
	I0108 22:29:59.552681 1152257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:29:59.617955 1152257 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-08 22:29:59.608158193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:29:59.618108 1152257 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 22:29:59.618371 1152257 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0108 22:29:59.618536 1152257 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 22:29:59.620468 1152257 out.go:169] Using Docker driver with root privileges
	I0108 22:29:59.622195 1152257 cni.go:84] Creating CNI manager for ""
	I0108 22:29:59.622213 1152257 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:29:59.622224 1152257 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 22:29:59.622239 1152257 start_flags.go:321] config:
	{Name:download-only-248557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-248557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:29:59.624155 1152257 out.go:97] Starting control plane node download-only-248557 in cluster download-only-248557
	I0108 22:29:59.624175 1152257 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:29:59.625884 1152257 out.go:97] Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:29:59.625910 1152257 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:29:59.626059 1152257 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:29:59.642694 1152257 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 22:29:59.643460 1152257 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 22:29:59.643562 1152257 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 22:29:59.692639 1152257 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0108 22:29:59.692666 1152257 cache.go:56] Caching tarball of preloaded images
	I0108 22:29:59.693175 1152257 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 22:29:59.695538 1152257 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 22:29:59.695557 1152257 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:29:59.804859 1152257 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0108 22:30:05.541910 1152257 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 22:30:13.687316 1152257 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:30:13.687420 1152257 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-248557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (18.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.339941637s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (18.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-248557
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-248557: exit status 85 (90.249074ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:29 UTC |          |
	|         | -p download-only-248557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |          |
	|         | -p download-only-248557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:30:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:30:15.967464 1152330 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:30:15.967665 1152330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:15.967694 1152330 out.go:309] Setting ErrFile to fd 2...
	I0108 22:30:15.967716 1152330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:15.968007 1152330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	W0108 22:30:15.968190 1152330 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: no such file or directory
	I0108 22:30:15.968505 1152330 out.go:303] Setting JSON to true
	I0108 22:30:15.969391 1152330 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18756,"bootTime":1704734260,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:30:15.969493 1152330 start.go:138] virtualization:  
	I0108 22:30:15.971921 1152330 out.go:97] [download-only-248557] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:30:15.974248 1152330 out.go:169] MINIKUBE_LOCATION=17866
	I0108 22:30:15.972254 1152330 notify.go:220] Checking for updates...
	I0108 22:30:15.977854 1152330 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:30:15.979867 1152330 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:30:15.982014 1152330 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:30:15.983941 1152330 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 22:30:15.987817 1152330 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:30:15.988365 1152330 config.go:182] Loaded profile config "download-only-248557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 22:30:15.988464 1152330 start.go:810] api.Load failed for download-only-248557: filestore "download-only-248557": Docker machine "download-only-248557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:30:15.988567 1152330 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:30:15.988597 1152330 start.go:810] api.Load failed for download-only-248557: filestore "download-only-248557": Docker machine "download-only-248557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:30:16.016929 1152330 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:30:16.017079 1152330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:16.098940 1152330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:16.088959365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:16.099057 1152330 docker.go:295] overlay module found
	I0108 22:30:16.101190 1152330 out.go:97] Using the docker driver based on existing profile
	I0108 22:30:16.101219 1152330 start.go:298] selected driver: docker
	I0108 22:30:16.101226 1152330 start.go:902] validating driver "docker" against &{Name:download-only-248557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-248557 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:30:16.101405 1152330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:16.168410 1152330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:16.158823198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:16.168895 1152330 cni.go:84] Creating CNI manager for ""
	I0108 22:30:16.168915 1152330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:30:16.168929 1152330 start_flags.go:321] config:
	{Name:download-only-248557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-248557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:30:16.170865 1152330 out.go:97] Starting control plane node download-only-248557 in cluster download-only-248557
	I0108 22:30:16.170883 1152330 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:30:16.172673 1152330 out.go:97] Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:30:16.172696 1152330 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:30:16.172773 1152330 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:30:16.189917 1152330 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 22:30:16.190064 1152330 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 22:30:16.190088 1152330 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 22:30:16.190096 1152330 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 22:30:16.190104 1152330 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 22:30:16.238051 1152330 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0108 22:30:16.238071 1152330 cache.go:56] Caching tarball of preloaded images
	I0108 22:30:16.238679 1152330 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 22:30:16.240984 1152330 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 22:30:16.241019 1152330 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:30:16.360856 1152330 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-248557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (16.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-248557 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.305732466s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (16.31s)
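
The download-only runs above pass -o=json, so minikube emits one machine-readable JSON progress event per line instead of its normal output. A minimal sketch of watching that stream locally, assuming jq is installed; the profile name download-only-demo is hypothetical (the test uses download-only-248557):

# Stream the JSON events from a download-only start and print each parsed event.
out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
  --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker | jq '.'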

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-248557
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-248557: exit status 85 (97.036572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:29 UTC |          |
	|         | -p download-only-248557           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |          |
	|         | -p download-only-248557           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-248557 | jenkins | v1.32.0 | 08 Jan 24 22:30 UTC |          |
	|         | -p download-only-248557           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:30:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:30:34.398907 1152404 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:30:34.399084 1152404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:34.399092 1152404 out.go:309] Setting ErrFile to fd 2...
	I0108 22:30:34.399099 1152404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:30:34.399341 1152404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	W0108 22:30:34.399497 1152404 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-1146913/.minikube/config/config.json: no such file or directory
	I0108 22:30:34.399737 1152404 out.go:303] Setting JSON to true
	I0108 22:30:34.400581 1152404 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18775,"bootTime":1704734260,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:30:34.400654 1152404 start.go:138] virtualization:  
	I0108 22:30:34.403254 1152404 out.go:97] [download-only-248557] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:30:34.405437 1152404 out.go:169] MINIKUBE_LOCATION=17866
	I0108 22:30:34.403531 1152404 notify.go:220] Checking for updates...
	I0108 22:30:34.407489 1152404 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:30:34.409318 1152404 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:30:34.411031 1152404 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:30:34.413292 1152404 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0108 22:30:34.417438 1152404 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:30:34.417956 1152404 config.go:182] Loaded profile config "download-only-248557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 22:30:34.418037 1152404 start.go:810] api.Load failed for download-only-248557: filestore "download-only-248557": Docker machine "download-only-248557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:30:34.418141 1152404 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:30:34.418167 1152404 start.go:810] api.Load failed for download-only-248557: filestore "download-only-248557": Docker machine "download-only-248557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:30:34.440706 1152404 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:30:34.440820 1152404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:34.524576 1152404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:34.513534901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:34.524682 1152404 docker.go:295] overlay module found
	I0108 22:30:34.526969 1152404 out.go:97] Using the docker driver based on existing profile
	I0108 22:30:34.526996 1152404 start.go:298] selected driver: docker
	I0108 22:30:34.527004 1152404 start.go:902] validating driver "docker" against &{Name:download-only-248557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-248557 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:30:34.527187 1152404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:30:34.593151 1152404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-08 22:30:34.584155988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:30:34.593632 1152404 cni.go:84] Creating CNI manager for ""
	I0108 22:30:34.593650 1152404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0108 22:30:34.593667 1152404 start_flags.go:321] config:
	{Name:download-only-248557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-248557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:30:34.596039 1152404 out.go:97] Starting control plane node download-only-248557 in cluster download-only-248557
	I0108 22:30:34.596059 1152404 cache.go:121] Beginning downloading kic base image for docker with crio
	I0108 22:30:34.598211 1152404 out.go:97] Pulling base image v0.0.42-1703790982-17866 ...
	I0108 22:30:34.598236 1152404 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:30:34.598420 1152404 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local docker daemon
	I0108 22:30:34.614867 1152404 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d to local cache
	I0108 22:30:34.615003 1152404 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory
	I0108 22:30:34.615025 1152404 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d in local cache directory, skipping pull
	I0108 22:30:34.615032 1152404 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d exists in cache, skipping pull
	I0108 22:30:34.615041 1152404 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d as a tarball
	I0108 22:30:34.665661 1152404 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0108 22:30:34.665684 1152404 cache.go:56] Caching tarball of preloaded images
	I0108 22:30:34.666316 1152404 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 22:30:34.668630 1152404 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 22:30:34.668667 1152404 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:30:34.791444 1152404 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:307124b87428587d9288b24ec2db2592 -> /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0108 22:30:48.706254 1152404 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0108 22:30:48.706387 1152404 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-1146913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-248557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.10s)
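
The log above fetches the preload tarball with an md5 value in the URL query string and then verifies it on disk ("saving checksum" / "verifying checksum"). A rough sketch of repeating that check by hand, using the URL and checksum exactly as they appear in the log; curl and md5sum are assumed to be available:

# Fetch the arm64 CRI-O preload for v1.29.0-rc.2 and compare its md5 with the
# value minikube requested (307124b87428587d9288b24ec2db2592, from the URL above).
curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
md5sum preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4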

                                                
                                    
TestDownloadOnly/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-248557
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-765092 --alsologtostderr --binary-mirror http://127.0.0.1:33297 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-765092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-765092
--- PASS: TestBinaryMirror (0.63s)
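
TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP endpoint (127.0.0.1:33297 here), so kubectl/kubeadm/kubelet are fetched from that address instead of the default release location. A hedged sketch of doing something similar by hand; the port, directory, and the assumption that the mirror must reproduce the upstream release path layout (vX.Y.Z/bin/linux/arm64/...) are illustrative, not taken from the test:

# Serve a local directory over HTTP and point minikube's binary downloads at it.
python3 -m http.server 8080 --directory /srv/k8s-mirror &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:8080 --driver=docker --container-runtime=crio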

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-260832
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-260832: exit status 85 (88.76105ms)

                                                
                                                
-- stdout --
	* Profile "addons-260832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-260832"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-260832
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-260832: exit status 85 (94.902244ms)

                                                
                                                
-- stdout --
	* Profile "addons-260832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-260832"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
TestAddons/Setup (168.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-260832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-260832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m48.102528435s)
--- PASS: TestAddons/Setup (168.10s)
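
With the cluster up, the state of everything enabled by that start line can be checked before the parallel addon tests begin. A small sketch using the same profile name as the test:

# Confirm the addons enabled above (ingress, registry, metrics-server, csi-hostpath-driver, ...)
# report as enabled, and that their pods have settled.
out/minikube-linux-arm64 -p addons-260832 addons list
kubectl --context addons-260832 get pods -A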

                                                
                                    
TestAddons/parallel/Registry (16.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 106.061118ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9b76r" [77da5921-b1b4-4033-a80b-87f834f8b970] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005582003s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fr2zl" [d9d359d0-f7ec-4cda-9d51-479545d9a406] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004985634s
addons_test.go:340: (dbg) Run:  kubectl --context addons-260832 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-260832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-260832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.205637585s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 ip
2024/01/08 22:33:56 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.61s)
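
The registry check has two halves: an in-cluster wget --spider against the registry Service, and a plain GET against port 5000 on the node IP (the "GET http://192.168.49.2:5000" line above, served by registry-proxy). A sketch of the node-side half from the host; the /v2/_catalog path is the standard registry HTTP API and an assumption here, since the test only requests the root:

# Query the registry through the node-level proxy, as the test's final GET does.
NODE_IP=$(out/minikube-linux-arm64 -p addons-260832 ip)   # 192.168.49.2 in this run
curl -s "http://${NODE_IP}:5000/v2/_catalog"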

                                                
                                    
TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kbdh8" [7fd90689-a801-4792-be69-20047027fdf7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004478084s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-260832
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-260832: (5.824497243s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.11s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 9.531636ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-42csc" [03fc0110-e5c5-478c-b243-ee4881fde5ae] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007927037s
addons_test.go:415: (dbg) Run:  kubectl --context addons-260832 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.11s)
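
The metrics-server check is simply that kubectl top returns data once the deployment is healthy; after the addon is disabled the same command fails. The same probe by hand, with the test's profile:

# Pod- and node-level resource usage served by metrics-server.
kubectl --context addons-260832 top pods -n kube-system
kubectl --context addons-260832 top nodes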

                                                
                                    
TestAddons/parallel/CSI (52.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 10.696026ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-260832 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-260832 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e9257964-9e1c-490c-bce9-cbcaeb25d7e0] Pending
helpers_test.go:344: "task-pv-pod" [e9257964-9e1c-490c-bce9-cbcaeb25d7e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e9257964-9e1c-490c-bce9-cbcaeb25d7e0] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003459911s
addons_test.go:584: (dbg) Run:  kubectl --context addons-260832 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-260832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-260832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-260832 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-260832 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-260832 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-260832 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7a272e64-abfa-411f-99c8-7e714ea191d5] Pending
helpers_test.go:344: "task-pv-pod-restore" [7a272e64-abfa-411f-99c8-7e714ea191d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7a272e64-abfa-411f-99c8-7e714ea191d5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004910892s
addons_test.go:626: (dbg) Run:  kubectl --context addons-260832 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-260832 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-260832 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-260832 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861358776s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.76s)
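
The CSI subtest walks a full cycle: PVC, pod, VolumeSnapshot, restored PVC, second pod, then teardown. The manifests come from testdata/csi-hostpath-driver and are not reproduced in this log; the sketch below shows only the general shape of the snapshot-and-restore half. The StorageClass and VolumeSnapshotClass names are assumptions about what the addon installs (verify with kubectl get storageclass,volumesnapshotclass):

# Snapshot an existing claim and restore it into a new one (object names hypothetical).
kubectl --context addons-260832 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc                 # the claim created from pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore-demo
spec:
  storageClassName: csi-hostpath-sc                 # assumed addon default
  dataSource:
    name: demo-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF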

                                                
                                    
TestAddons/parallel/Headlamp (14.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-260832 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-260832 --alsologtostderr -v=1: (1.92398316s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-sbxl9" [1d525e53-8a82-4770-8979-3a865697e71b] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-sbxl9" [1d525e53-8a82-4770-8979-3a865697e71b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-sbxl9" [1d525e53-8a82-4770-8979-3a865697e71b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003614387s
--- PASS: TestAddons/parallel/Headlamp (14.93s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-dmbbm" [825a539b-8e58-428e-84b2-77e9a28d5e66] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00344278s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-260832
addons_test.go:860: (dbg) Done: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-260832: (1.034796786s)
--- PASS: TestAddons/parallel/CloudSpanner (7.05s)

                                                
                                    
TestAddons/parallel/LocalPath (54.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-260832 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-260832 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0d9514e5-f671-4b73-88d3-b7fa9e59d5d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0d9514e5-f671-4b73-88d3-b7fa9e59d5d7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0d9514e5-f671-4b73-88d3-b7fa9e59d5d7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005829456s
addons_test.go:891: (dbg) Run:  kubectl --context addons-260832 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 ssh "cat /opt/local-path-provisioner/pvc-c2c49568-2563-4197-8146-3703714a5804_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-260832 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-260832 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-260832 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-260832 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.451564168s)
--- PASS: TestAddons/parallel/LocalPath (54.04s)
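For reference, the PVC phase polling above can be replayed by hand with the same kubectl call the test issues. A minimal sketch, assuming the addons-260832 context and the test-pvc claim from the log (illustrative only, not the test's code):

  # Poll the claim created by the local-path test until it reports Bound, then dump it,
  # mirroring the helpers_test.go:394 and addons_test.go:891 steps above.
  while [ "$(kubectl --context addons-260832 get pvc test-pvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
    sleep 2
  done
  kubectl --context addons-260832 get pvc test-pvc -n default -o=json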

TestAddons/parallel/NvidiaDevicePlugin (6.59s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ljg8m" [926d3b0d-c236-40b7-990f-4df2a22987bc] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005028469s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-260832
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-s447v" [fea97a86-5df0-4212-aa22-dea1a1aa01dd] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004473734s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-260832 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-260832 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.35s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-260832
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-260832: (12.021682646s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-260832
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-260832
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-260832
--- PASS: TestAddons/StoppedEnableDisable (12.35s)
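The point of this check is that addon toggling still works against a stopped profile; a minimal sketch of the same sequence, using only the commands shown above:

  # Stop the cluster, then enable/disable addons while it is down.
  out/minikube-linux-arm64 stop -p addons-260832
  out/minikube-linux-arm64 addons enable dashboard -p addons-260832
  out/minikube-linux-arm64 addons disable dashboard -p addons-260832
  out/minikube-linux-arm64 addons disable gvisor -p addons-260832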

TestCertOptions (37.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-709394 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-709394 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.944650891s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-709394 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-709394 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-709394 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-709394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-709394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-709394: (2.033267386s)
--- PASS: TestCertOptions (37.73s)
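The SAN and port assertions above amount to inspecting the generated apiserver certificate. A hedged sketch using the flags and profile name from the log (the grep is an added convenience, not part of the test):

  # Start with extra apiserver SANs and a custom port, then check they landed in the cert.
  out/minikube-linux-arm64 start -p cert-options-709394 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p cert-options-709394 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'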

TestCertExpiration (246.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.12421659s)
E0108 23:16:55.943458 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.83014537s)
helpers_test.go:175: Cleaning up "cert-expiration-330804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-330804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-330804: (2.577804795s)
--- PASS: TestCertExpiration (246.53s)
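The two starts above differ only in --cert-expiration; a minimal sketch of the same flow (durations and profile name copied from the log):

  # Issue certificates that expire in 3 minutes, then restart the same profile
  # with a long expiry so minikube regenerates them.
  out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 \
    --cert-expiration=3m --driver=docker --container-runtime=crio
  # ... allow the 3m window to lapse ...
  out/minikube-linux-arm64 start -p cert-expiration-330804 --memory=2048 \
    --cert-expiration=8760h --driver=docker --container-runtime=crio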

TestForceSystemdFlag (42.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-921589 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 23:14:51.767991 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-921589 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.603863247s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-921589 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-921589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-921589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-921589: (2.687375012s)
--- PASS: TestForceSystemdFlag (42.74s)
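The verification step reads the CRI-O drop-in that minikube writes inside the node; a sketch of the same two commands, assuming the force-systemd-flag-921589 profile from the log:

  # Force the systemd cgroup manager at start, then inspect the generated CRI-O config.
  out/minikube-linux-arm64 start -p force-systemd-flag-921589 --memory=2048 \
    --force-systemd --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p force-systemd-flag-921589 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"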

TestForceSystemdEnv (46.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-280377 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-280377 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.925002573s)
helpers_test.go:175: Cleaning up "force-systemd-env-280377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-280377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-280377: (2.814179948s)
--- PASS: TestForceSystemdEnv (46.74s)

TestErrorSpam/setup (32.4s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-497340 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-497340 --driver=docker  --container-runtime=crio
E0108 22:38:40.948628 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:40.955302 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:40.965552 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:40.985883 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:41.026201 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:41.106510 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:41.266881 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:41.587353 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:42.228559 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:43.509041 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:46.069659 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:38:51.189829 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:39:01.430428 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-497340 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-497340 --driver=docker  --container-runtime=crio: (32.396693331s)
--- PASS: TestErrorSpam/setup (32.40s)

TestErrorSpam/start (0.91s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

TestErrorSpam/status (1.17s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.91s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 pause
--- PASS: TestErrorSpam/pause (1.91s)

TestErrorSpam/unpause (2.03s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

TestErrorSpam/stop (1.54s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 stop: (1.281231065s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-497340 --log_dir /tmp/nospam-497340 stop
--- PASS: TestErrorSpam/stop (1.54s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17866-1146913/.minikube/files/etc/test/nested/copy/1152251/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.5s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0108 22:39:21.910622 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:40:02.871523 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-037488 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m14.502008089s)
--- PASS: TestFunctional/serial/StartWithProxy (74.50s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.39s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-037488 --alsologtostderr -v=8: (32.38514927s)
functional_test.go:659: soft start took 32.389332906s for "functional-037488" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.39s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-037488 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:3.1: (1.274679032s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:3.3: (1.316741541s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 cache add registry.k8s.io/pause:latest: (1.239477678s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-037488 /tmp/TestFunctionalserialCacheCmdcacheadd_local3002737164/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache add minikube-local-cache-test:functional-037488
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache delete minikube-local-cache-test:functional-037488
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-037488
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (340.577758ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 cache reload: (1.148044632s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)
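The reload flow above can be replayed manually; this sketch mirrors the logged commands (the || true only keeps the expected failure from stopping a strict shell):

  # Remove the cached image from the node, confirm it is gone, then reload it from minikube's cache.
  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true   # fails: image removed
  out/minikube-linux-arm64 -p functional-037488 cache reload
  out/minikube-linux-arm64 -p functional-037488 ssh sudo crictl inspecti registry.k8s.io/pause:latest           # succeeds again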

TestFunctional/serial/CacheCmd/cache/delete (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 kubectl -- --context functional-037488 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-037488 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (36.16s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 22:41:24.793093 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-037488 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.163930738s)
functional_test.go:757: restart took 36.164025179s for "functional-037488" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.16s)
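The restart passes a per-component override via --extra-config; a minimal sketch of the invocation, with the value taken from the log:

  # Restart the existing profile with an extra kube-apiserver admission plugin and wait for all components.
  out/minikube-linux-arm64 start -p functional-037488 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all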

TestFunctional/serial/ComponentHealth (0.12s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-037488 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.85s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 logs: (1.849500028s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.89s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 logs --file /tmp/TestFunctionalserialLogsFileCmd2901744700/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 logs --file /tmp/TestFunctionalserialLogsFileCmd2901744700/001/logs.txt: (1.893197324s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.89s)

TestFunctional/serial/InvalidService (4.3s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-037488 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-037488
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-037488: exit status 115 (673.206112ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31049 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-037488 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
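Exit status 115 above is minikube's SVC_UNREACHABLE error for a service with no running pods; a hedged sketch of the reproduction (testdata path as referenced in the log):

  # Apply a service whose pods never run, ask minikube for its URL, then clean up.
  kubectl --context functional-037488 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-arm64 service invalid-svc -p functional-037488 || echo "exit $?"   # prints exit 115
  kubectl --context functional-037488 delete -f testdata/invalidsvc.yaml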

TestFunctional/parallel/ConfigCmd (0.66s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 config get cpus: exit status 14 (120.643565ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 config get cpus: exit status 14 (101.486862ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.66s)
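Exit status 14 is what config get returns for an unset key; a short sketch of the same set/get/unset round-trip:

  # get on an unset key exits 14 with "Error: specified key could not be found in config".
  out/minikube-linux-arm64 -p functional-037488 config get cpus || echo "unset (exit $?)"
  out/minikube-linux-arm64 -p functional-037488 config set cpus 2
  out/minikube-linux-arm64 -p functional-037488 config get cpus    # prints 2
  out/minikube-linux-arm64 -p functional-037488 config unset cpus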

TestFunctional/parallel/DashboardCmd (9.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-037488 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-037488 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1176946: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.51s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-037488 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (221.614599ms)

-- stdout --
	* [functional-037488] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0108 22:42:29.150007 1176630 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:42:29.150159 1176630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:42:29.150169 1176630 out.go:309] Setting ErrFile to fd 2...
	I0108 22:42:29.150175 1176630 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:42:29.150453 1176630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:42:29.150899 1176630 out.go:303] Setting JSON to false
	I0108 22:42:29.151839 1176630 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19490,"bootTime":1704734260,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:42:29.151924 1176630 start.go:138] virtualization:  
	I0108 22:42:29.154345 1176630 out.go:177] * [functional-037488] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 22:42:29.157158 1176630 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:42:29.157304 1176630 notify.go:220] Checking for updates...
	I0108 22:42:29.161390 1176630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:42:29.163415 1176630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:42:29.165374 1176630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:42:29.167416 1176630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 22:42:29.169565 1176630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:42:29.172384 1176630 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:42:29.173083 1176630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:42:29.199169 1176630 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:42:29.199312 1176630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:42:29.289632 1176630 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 22:42:29.27947943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:42:29.289735 1176630 docker.go:295] overlay module found
	I0108 22:42:29.292247 1176630 out.go:177] * Using the docker driver based on existing profile
	I0108 22:42:29.294884 1176630 start.go:298] selected driver: docker
	I0108 22:42:29.294903 1176630 start.go:902] validating driver "docker" against &{Name:functional-037488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-037488 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:42:29.295007 1176630 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:42:29.297736 1176630 out.go:177] 
	W0108 22:42:29.299660 1176630 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 22:42:29.301545 1176630 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
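The first dry run fails flag validation (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB minimum; the second succeeds. A sketch of the pair of invocations from the log:

  # --dry-run validates the requested settings against the existing profile without starting anything.
  out/minikube-linux-arm64 start -p functional-037488 --dry-run --memory 250MB \
    --alsologtostderr --driver=docker --container-runtime=crio   # exit 23: memory below minimum
  out/minikube-linux-arm64 start -p functional-037488 --dry-run \
    --alsologtostderr -v=1 --driver=docker --container-runtime=crio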

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-037488 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-037488 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (240.72953ms)

-- stdout --
	* [functional-037488] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0108 22:42:28.912389 1176590 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:42:28.912628 1176590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:42:28.912657 1176590 out.go:309] Setting ErrFile to fd 2...
	I0108 22:42:28.912679 1176590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:42:28.913481 1176590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:42:28.913949 1176590 out.go:303] Setting JSON to false
	I0108 22:42:28.915051 1176590 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19489,"bootTime":1704734260,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 22:42:28.915160 1176590 start.go:138] virtualization:  
	I0108 22:42:28.918140 1176590 out.go:177] * [functional-037488] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0108 22:42:28.920587 1176590 notify.go:220] Checking for updates...
	I0108 22:42:28.921387 1176590 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 22:42:28.923831 1176590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:42:28.925869 1176590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 22:42:28.927907 1176590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 22:42:28.929872 1176590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 22:42:28.931975 1176590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:42:28.934731 1176590 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:42:28.935741 1176590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:42:28.969706 1176590 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 22:42:28.969846 1176590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:42:29.064725 1176590 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-08 22:42:29.053233019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:42:29.064856 1176590 docker.go:295] overlay module found
	I0108 22:42:29.068314 1176590 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0108 22:42:29.071399 1176590 start.go:298] selected driver: docker
	I0108 22:42:29.071418 1176590 start.go:902] validating driver "docker" against &{Name:functional-037488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-037488 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 22:42:29.071534 1176590 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:42:29.074308 1176590 out.go:177] 
	W0108 22:42:29.076445 1176590 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 22:42:29.079092 1176590 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
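The three status invocations above cover the default, Go-template, and JSON output forms; a sketch with the template quoted for interactive shells (the kublet label spelling is verbatim from the test's format string):

  out/minikube-linux-arm64 -p functional-037488 status
  out/minikube-linux-arm64 -p functional-037488 status \
    -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-arm64 -p functional-037488 status -o json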

TestFunctional/parallel/ServiceCmdConnect (10.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-037488 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-037488 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jmzzw" [f6f9ebc7-5452-4e3d-b3e7-044485ec735d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jmzzw" [f6f9ebc7-5452-4e3d-b3e7-044485ec735d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003884041s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30655
functional_test.go:1674: http://192.168.49.2:30655: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-jmzzw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30655
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.65s)
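The flow above is: create a deployment, expose it as a NodePort service, then resolve the node URL with "minikube service". A hand-run sketch of the same steps; the commands mirror the log, while the final curl is an assumed verification step, not something the test runs.

kubectl --context functional-037488 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-037488 expose deployment hello-node-connect --type=NodePort --port=8080
# Resolve the NodePort URL and hit the echo server once (assumed check).
URL=$(out/minikube-linux-arm64 -p functional-037488 service hello-node-connect --url)
curl -s "$URL"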

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ee9aeff5-cf7d-4d10-8b33-544175c32c6d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005949506s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-037488 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-037488 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-037488 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-037488 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd481c97-6030-4753-a55b-ab357bc1d47b] Pending
helpers_test.go:344: "sp-pod" [cd481c97-6030-4753-a55b-ab357bc1d47b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cd481c97-6030-4753-a55b-ab357bc1d47b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003938982s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-037488 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-037488 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-037488 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73d425fc-f440-446a-8d35-4bd886353537] Pending
helpers_test.go:344: "sp-pod" [73d425fc-f440-446a-8d35-4bd886353537] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004505772s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-037488 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.58s)
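The PVC test claims storage from the default StorageClass, writes a file from one pod, then recreates the pod and confirms the file survived. A minimal sketch of the claim step using an inline manifest; the real testdata/storage-provisioner/pvc.yaml is not reproduced here, so the manifest below (name and size included) is an illustrative stand-in.

# Create a small PVC against the default storage class (illustrative manifest).
kubectl --context functional-037488 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-037488 get pvc myclaim-demo -o json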

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh -n functional-037488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cp functional-037488:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd118178947/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh -n functional-037488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh -n functional-037488 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.86s)
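"minikube cp" copies files both into and out of the node, and "ssh -n" targets a specific node by name. A sketch mirroring the copies above (the local destination path is arbitrary):

# Host -> node, node -> host, then read the file back over ssh.
out/minikube-linux-arm64 -p functional-037488 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-037488 cp functional-037488:/home/docker/cp-test.txt /tmp/cp-test.txt
out/minikube-linux-arm64 -p functional-037488 ssh -n functional-037488 "sudo cat /home/docker/cp-test.txt"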

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1152251/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/test/nested/copy/1152251/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1152251.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/1152251.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1152251.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /usr/share/ca-certificates/1152251.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/11522512.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/11522512.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/11522512.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /usr/share/ca-certificates/11522512.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.71s)
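The cert-sync check confirms that certificates provided on the host side (presumably from the host's ~/.minikube/certs directory) appear inside the guest both under /etc/ssl/certs and /usr/share/ca-certificates; the hashed ".0" filenames look like openssl subject-hash links, though that detail is an inference rather than something the log states. A manual spot check using the same paths as above:

# Verify a synced certificate is visible inside the node by name and by hash link.
out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/1152251.pem"
out/minikube-linux-arm64 -p functional-037488 ssh "sudo cat /etc/ssl/certs/51391683.0"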

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-037488 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.29s)
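The go-template above iterates the label keys of the first node; an equivalent quick check (not the exact command the test runs) is simply:

kubectl --context functional-037488 get nodes --show-labels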

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active docker": exit status 1 (473.566552ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active containerd": exit status 1 (449.496847ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.92s)
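With cri-o as the selected runtime, the docker and containerd units are expected to be inactive; "systemctl is-active" exits non-zero (3 here) for an inactive unit, which is why the test treats exit status 1 from ssh plus "inactive" on stdout as a pass. A sketch of the same probe, plus a check of the active runtime (the crio unit name and its expected "active" state are assumptions):

out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active docker"
out/minikube-linux-arm64 -p functional-037488 ssh "sudo systemctl is-active crio"   # expected "active" on this profile (assumption)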

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1174692: os: process already finished
helpers_test.go:502: unable to terminate pid 1174542: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-037488 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [96d71633-d9cb-4ebb-b035-24b65f8c702b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [96d71633-d9cb-4ebb-b035-24b65f8c702b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005438953s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.57s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-037488 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.136.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
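The tunnel serial tests keep "minikube tunnel" running, deploy an nginx service (of type LoadBalancer, judging by the loadBalancer.ingress field read above), and then read the assigned ingress IP straight from the Service status. A condensed sketch; the service manifest lives in testdata/testsvc.yaml and is not reproduced here.

# Keep "minikube tunnel" running (its own terminal is simplest), then read the
# ingress IP assigned to the LoadBalancer service.
out/minikube-linux-arm64 -p functional-037488 tunnel
kubectl --context functional-037488 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'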

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-037488 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-037488 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-037488 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-blmc8" [132f5783-aaae-466a-8d2f-bbce4964d821] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-blmc8" [132f5783-aaae-466a-8d2f-bbce4964d821] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004689155s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "411.207507ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "68.67092ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "331.716926ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "73.220559ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
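Profile listing supports a table view, JSON, and a lighter JSON mode; the much shorter timing for --light above suggests it skips live status probing of each cluster. A sketch of the three variants exercised here:

out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 profile list -o json
out/minikube-linux-arm64 profile list -o json --light   # faster; appears to skip live status checks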

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdany-port78994776/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704753743572904636" to /tmp/TestFunctionalparallelMountCmdany-port78994776/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704753743572904636" to /tmp/TestFunctionalparallelMountCmdany-port78994776/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704753743572904636" to /tmp/TestFunctionalparallelMountCmdany-port78994776/001/test-1704753743572904636
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.752886ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 22:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 22:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 22:42 test-1704753743572904636
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh cat /mount-9p/test-1704753743572904636
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-037488 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bb401e3b-4bd9-49c3-8006-60330ba5fc49] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bb401e3b-4bd9-49c3-8006-60330ba5fc49] Running
helpers_test.go:344: "busybox-mount" [bb401e3b-4bd9-49c3-8006-60330ba5fc49] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bb401e3b-4bd9-49c3-8006-60330ba5fc49] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004961138s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-037488 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdany-port78994776/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)
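The mount test publishes a host directory into the guest over 9p and verifies it both from "minikube ssh" and from a pod (testdata/busybox-mount-test.yaml, not reproduced here). A host-side sketch; /tmp/demo-mount is a hypothetical path, not one used by the test.

# Publish a host directory at /mount-9p inside the node, then inspect it.
mkdir -p /tmp/demo-mount
out/minikube-linux-arm64 mount -p functional-037488 /tmp/demo-mount:/mount-9p
# In a second terminal while the mount command is still running:
out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-037488 ssh -- ls -la /mount-9p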

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service list -o json
functional_test.go:1493: Took "614.501114ms" to run "out/minikube-linux-arm64 -p functional-037488 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32063
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32063
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdspecific-port1870796721/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.171145ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdspecific-port1870796721/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh "sudo umount -f /mount-9p": exit status 1 (453.453572ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-037488 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdspecific-port1870796721/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T" /mount1: (1.344357336s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-037488 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-037488 /tmp/TestFunctionalparallelMountCmdVerifyCleanup709225243/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 version -o=json --components: (1.333678401s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)
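"version --short" prints only the minikube version, while "version -o=json --components" also reports component versions, which presumably accounts for the extra second spent here (it appears to query the running node). The two invocations from this block:

out/minikube-linux-arm64 -p functional-037488 version --short
out/minikube-linux-arm64 -p functional-037488 version -o=json --components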

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037488 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-037488
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037488 image ls --format short --alsologtostderr:
I0108 22:42:58.450127 1179098 out.go:296] Setting OutFile to fd 1 ...
I0108 22:42:58.450409 1179098 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.450434 1179098 out.go:309] Setting ErrFile to fd 2...
I0108 22:42:58.450456 1179098 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.450717 1179098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
I0108 22:42:58.451427 1179098 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.451580 1179098 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.452113 1179098 cli_runner.go:164] Run: docker container inspect functional-037488 --format={{.State.Status}}
I0108 22:42:58.479705 1179098 ssh_runner.go:195] Run: systemctl --version
I0108 22:42:58.479759 1179098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037488
I0108 22:42:58.506566 1179098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/functional-037488/id_rsa Username:docker}
I0108 22:42:58.603619 1179098 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
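As the stderr traces in these ImageCommands blocks show, every listing format is served from the same "sudo crictl images --output json" call on the node and only the presentation differs. The four formats exercised in this run:

# Same image inventory in the four supported output formats.
out/minikube-linux-arm64 -p functional-037488 image ls --format short
out/minikube-linux-arm64 -p functional-037488 image ls --format table
out/minikube-linux-arm64 -p functional-037488 image ls --format json
out/minikube-linux-arm64 -p functional-037488 image ls --format yaml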

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037488 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 8aea65d81da20 | 196MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| gcr.io/google-containers/addon-resizer  | functional-037488  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037488 image ls --format table --alsologtostderr:
I0108 22:42:59.147422 1179230 out.go:296] Setting OutFile to fd 1 ...
I0108 22:42:59.147589 1179230 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:59.147612 1179230 out.go:309] Setting ErrFile to fd 2...
I0108 22:42:59.147619 1179230 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:59.148076 1179230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
I0108 22:42:59.149025 1179230 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:59.149251 1179230 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:59.149908 1179230 cli_runner.go:164] Run: docker container inspect functional-037488 --format={{.State.Status}}
I0108 22:42:59.188954 1179230 ssh_runner.go:195] Run: systemctl --version
I0108 22:42:59.189107 1179230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037488
I0108 22:42:59.220751 1179230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/functional-037488/id_rsa Username:docker}
I0108 22:42:59.328554 1179230 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037488 image ls --format json --alsologtostderr:
[{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f71424
48","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-037488"],"size":"34114467"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9
e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigest
s":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1c
d2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
"repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5
d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037488 image ls --format json --alsologtostderr:
I0108 22:42:58.790440 1179158 out.go:296] Setting OutFile to fd 1 ...
I0108 22:42:58.790723 1179158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.790754 1179158 out.go:309] Setting ErrFile to fd 2...
I0108 22:42:58.790776 1179158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.791151 1179158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
I0108 22:42:58.791936 1179158 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.792169 1179158 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.793032 1179158 cli_runner.go:164] Run: docker container inspect functional-037488 --format={{.State.Status}}
I0108 22:42:58.813411 1179158 ssh_runner.go:195] Run: systemctl --version
I0108 22:42:58.813466 1179158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037488
I0108 22:42:58.869386 1179158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/functional-037488/id_rsa Username:docker}
I0108 22:42:58.976049 1179158 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037488 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-037488
size: "34114467"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037488 image ls --format yaml --alsologtostderr:
I0108 22:42:58.441876 1179097 out.go:296] Setting OutFile to fd 1 ...
I0108 22:42:58.442085 1179097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.442095 1179097 out.go:309] Setting ErrFile to fd 2...
I0108 22:42:58.442101 1179097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:58.442376 1179097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
I0108 22:42:58.443062 1179097 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.443265 1179097 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:58.443916 1179097 cli_runner.go:164] Run: docker container inspect functional-037488 --format={{.State.Status}}
I0108 22:42:58.464056 1179097 ssh_runner.go:195] Run: systemctl --version
I0108 22:42:58.464111 1179097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037488
I0108 22:42:58.493417 1179097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/functional-037488/id_rsa Username:docker}
I0108 22:42:58.591154 1179097 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
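
The YAML dump above is the structured form of the image listing for this profile; to reproduce it by hand (or skim just the tag lines), something along these lines works. The first command is the one the test runs; the grep filter is only an illustration, not part of the test:
	out/minikube-linux-arm64 -p functional-037488 image ls --format yaml
	out/minikube-linux-arm64 -p functional-037488 image ls --format yaml | grep -A 1 'repoTags:'   # tag lines only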

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-037488 ssh pgrep buildkitd: exit status 1 (383.926936ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image build -t localhost/my-image:functional-037488 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 image build -t localhost/my-image:functional-037488 testdata/build --alsologtostderr: (2.44971944s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-037488 image build -t localhost/my-image:functional-037488 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 77345a7b91f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-037488
--> b8941d42dcb
Successfully tagged localhost/my-image:functional-037488
b8941d42dcb1c46cfa4541e59c4910e5d246b730bc6df6dcdbc613c1c4776d5e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-037488 image build -t localhost/my-image:functional-037488 testdata/build --alsologtostderr:
I0108 22:42:59.138276 1179235 out.go:296] Setting OutFile to fd 1 ...
I0108 22:42:59.139195 1179235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:59.139238 1179235 out.go:309] Setting ErrFile to fd 2...
I0108 22:42:59.139261 1179235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 22:42:59.139599 1179235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
I0108 22:42:59.140378 1179235 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:59.142095 1179235 config.go:182] Loaded profile config "functional-037488": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 22:42:59.142698 1179235 cli_runner.go:164] Run: docker container inspect functional-037488 --format={{.State.Status}}
I0108 22:42:59.171103 1179235 ssh_runner.go:195] Run: systemctl --version
I0108 22:42:59.171162 1179235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037488
I0108 22:42:59.195834 1179235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34043 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/functional-037488/id_rsa Username:docker}
I0108 22:42:59.298955 1179235 build_images.go:151] Building image from path: /tmp/build.1817602500.tar
I0108 22:42:59.299028 1179235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 22:42:59.310039 1179235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1817602500.tar
I0108 22:42:59.314630 1179235 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1817602500.tar: stat -c "%s %y" /var/lib/minikube/build/build.1817602500.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1817602500.tar': No such file or directory
I0108 22:42:59.314676 1179235 ssh_runner.go:362] scp /tmp/build.1817602500.tar --> /var/lib/minikube/build/build.1817602500.tar (3072 bytes)
I0108 22:42:59.350435 1179235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1817602500
I0108 22:42:59.368305 1179235 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1817602500 -xf /var/lib/minikube/build/build.1817602500.tar
I0108 22:42:59.386529 1179235 crio.go:297] Building image: /var/lib/minikube/build/build.1817602500
I0108 22:42:59.386637 1179235 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-037488 /var/lib/minikube/build/build.1817602500 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0108 22:43:01.468376 1179235 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-037488 /var/lib/minikube/build/build.1817602500 --cgroup-manager=cgroupfs: (2.081699144s)
I0108 22:43:01.468458 1179235 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1817602500
I0108 22:43:01.479504 1179235 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1817602500.tar
I0108 22:43:01.490482 1179235 build_images.go:207] Built localhost/my-image:functional-037488 from /tmp/build.1817602500.tar
I0108 22:43:01.490512 1179235 build_images.go:123] succeeded building to: functional-037488
I0108 22:43:01.490517 1179235 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
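
The three STEPs in the build output imply a Dockerfile of the shape below. This is a reconstruction from the log, not the actual testdata/build contents; the /tmp/build-demo directory and placeholder content.txt are hypothetical:
	mkdir -p /tmp/build-demo
	printf 'demo\n' > /tmp/build-demo/content.txt      # any file; only its presence matters for the ADD step
	cat > /tmp/build-demo/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-037488 image build -t localhost/my-image:functional-037488 /tmp/build-demo
On the crio runtime the build is delegated to podman inside the node, which is why the log shows "sudo podman build ... --cgroup-manager=cgroupfs".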

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/01/08 22:42:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.359044493s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-037488
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
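
update-context rewrites the kubeconfig entry for the profile so that kubectl points at the cluster's current IP and port; a quick manual check of the effect (kubectl on PATH is an assumption, not part of the test) is:
	out/minikube-linux-arm64 -p functional-037488 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-037488     # the context entry for this profile
	kubectl --context functional-037488 get nodes     # should reach the refreshed endpoint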

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr: (5.01895785s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr: (2.738315649s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.667734225s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-037488
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr: (3.640771236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image save gcr.io/google-containers/addon-resizer:functional-037488 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image rm gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-037488 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.007854464s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-037488
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-037488 image save --daemon gcr.io/google-containers/addon-resizer:functional-037488 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-037488
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
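
Taken together, the image subcommands above exercise a full round trip between the host's Docker daemon, the cluster's container storage, and a tarball on disk. Condensed into one sequence (same commands as in the log, with /tmp substituted for the Jenkins workspace path):
	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-037488
	out/minikube-linux-arm64 -p functional-037488 image load --daemon gcr.io/google-containers/addon-resizer:functional-037488
	out/minikube-linux-arm64 -p functional-037488 image save gcr.io/google-containers/addon-resizer:functional-037488 /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-037488 image rm gcr.io/google-containers/addon-resizer:functional-037488
	out/minikube-linux-arm64 -p functional-037488 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-037488 image ls                                 # the tag is listed again
	out/minikube-linux-arm64 -p functional-037488 image save --daemon gcr.io/google-containers/addon-resizer:functional-037488
	docker image inspect gcr.io/google-containers/addon-resizer:functional-037488          # back on the host side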

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-037488
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-037488
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-037488
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (94.36s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-332576 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0108 22:43:40.945278 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:44:08.633850 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-332576 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m34.357493617s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.36s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons enable ingress --alsologtostderr -v=5: (11.944150364s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.94s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.7s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-332576 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.70s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.57s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-465958 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0108 22:48:17.864751 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:48:40.945044 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-465958 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.570617662s)
--- PASS: TestJSONOutput/start/Command (78.57s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.84s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-465958 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-465958 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.92s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-465958 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-465958 --output=json --user=testUser: (5.916515617s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.29s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-261578 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-261578 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.586863ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"efe08340-fa26-4020-b210-91508e66a7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-261578] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff9f5bbb-76aa-484b-a0d5-56809d3e4689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"30712927-9877-4c76-b4c3-87f9b3ef3bdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"427bf4a2-f555-47cb-94d6-c3379d791308","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig"}}
	{"specversion":"1.0","id":"1143cfc3-85f9-4fe6-8af7-95f46718ba41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube"}}
	{"specversion":"1.0","id":"ef42ff62-351a-4a78-931a-e7de6a4e54e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0368a590-a8e1-429a-92ce-85b9aae1d6df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4ab7b03f-55a3-43d2-9b74-3b7f9e39a49f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-261578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-261578
--- PASS: TestErrorJSONOutput (0.29s)
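
Every line that --output=json emits is a self-contained CloudEvents-style JSON object, which makes the stream easy to filter mechanically. A sketch with jq (jq itself is an assumption, not part of the test) that extracts the error event from the failed start above:
	out/minikube-linux-arm64 start -p json-output-error-261578 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64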

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (54.54s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-583914 --network=
E0108 22:49:39.785486 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:49:51.767312 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:51.772965 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:51.783250 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:51.803547 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:51.843844 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:51.924127 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:52.084309 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:52.404854 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:53.045828 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:54.326057 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:49:56.886286 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:50:02.006534 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:50:12.246704 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-583914 --network=: (52.368621886s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-583914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-583914
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-583914: (2.145469551s)
--- PASS: TestKicCustomNetwork/create_custom_network (54.54s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.14s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-379792 --network=bridge
E0108 22:50:32.726907 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-379792 --network=bridge: (33.164574863s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-379792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-379792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-379792: (1.953250979s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.14s)

                                                
                                    
x
+
TestKicExistingNetwork (39.54s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-749466 --network=existing-network
E0108 22:51:13.687146 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-749466 --network=existing-network: (37.294572982s)
helpers_test.go:175: Cleaning up "existing-network-749466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-749466
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-749466: (2.089790272s)
--- PASS: TestKicExistingNetwork (39.54s)
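
With --network=existing-network, minikube attaches the node container to a Docker network that already exists instead of creating its own. Recreating the scenario by hand looks roughly like this; the docker network create/inspect/rm lines are illustrative and not taken from the log:
	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-749466 --network=existing-network
	docker network inspect existing-network --format '{{range .Containers}}{{.Name}} {{end}}'   # node container should be attached
	out/minikube-linux-arm64 delete -p existing-network-749466
	docker network rm existing-network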

                                                
                                    
x
+
TestKicCustomSubnet (36.2s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-926564 --subnet=192.168.60.0/24
E0108 22:51:55.943465 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-926564 --subnet=192.168.60.0/24: (34.08226877s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-926564 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-926564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-926564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-926564: (2.088227812s)
--- PASS: TestKicCustomSubnet (36.20s)
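
The subnet assertion here is simply a Go-template query against the Docker network that minikube creates for the profile; by hand, using the same commands the test runs:
	out/minikube-linux-arm64 start -p custom-subnet-926564 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-926564 --format "{{(index .IPAM.Config 0).Subnet}}"   # prints 192.168.60.0/24
	out/minikube-linux-arm64 delete -p custom-subnet-926564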

                                                
                                    
x
+
TestKicStaticIP (38.11s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-746647 --static-ip=192.168.200.200
E0108 22:52:23.627292 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 22:52:35.607417 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-746647 --static-ip=192.168.200.200: (35.774020877s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-746647 ip
helpers_test.go:175: Cleaning up "static-ip-746647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-746647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-746647: (2.146381616s)
--- PASS: TestKicStaticIP (38.11s)
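
Similarly, --static-ip pins the node's address, and `minikube ip` is enough to confirm it, as in the commands the test runs:
	out/minikube-linux-arm64 start -p static-ip-746647 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-746647 ip      # prints 192.168.200.200
	out/minikube-linux-arm64 delete -p static-ip-746647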

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (68.33s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-561162 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-561162 --driver=docker  --container-runtime=crio: (29.814590156s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-563755 --driver=docker  --container-runtime=crio
E0108 22:53:40.945367 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-563755 --driver=docker  --container-runtime=crio: (33.236923996s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-561162
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-563755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-563755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-563755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-563755: (1.990601617s)
helpers_test.go:175: Cleaning up "first-561162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-561162
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-561162: (1.978310722s)
--- PASS: TestMinikubeProfile (68.33s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-437452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-437452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.078006179s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.08s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-437452 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-439643 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-439643 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.517744666s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-439643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-437452 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-437452 --alsologtostderr -v=5: (1.678985413s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-439643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-439643
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-439643: (1.235987634s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-439643
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-439643: (6.991372494s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-439643 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)
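
The MountStart sequence boils down to: start a profile with a host-directory mount, confirm the directory is visible at /minikube-host, stop, restart, and confirm the mount is back. Condensed from the commands above (second profile only):
	out/minikube-linux-arm64 start -p mount-start-2-439643 --memory=2048 --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46465 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-start-2-439643 ssh -- ls /minikube-host    # mounted host directory is visible
	out/minikube-linux-arm64 stop -p mount-start-2-439643
	out/minikube-linux-arm64 start -p mount-start-2-439643                       # plain restart; mount survives the stop
	out/minikube-linux-arm64 -p mount-start-2-439643 ssh -- ls /minikube-host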

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (130.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-265402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 22:54:51.766927 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 22:55:03.994874 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:55:19.447839 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-265402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m9.782095362s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.34s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-265402 -- rollout status deployment/busybox: (2.870689333s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-kcr7b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.11s)
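
The check above rolls out a two-replica busybox deployment and then resolves external and in-cluster names from each pod. To see which node each replica landed on and repeat one lookup by hand (pod names vary per run; the -o wide listing is an illustration, not part of the test):
	out/minikube-linux-arm64 kubectl -p multinode-265402 -- get pods -o wide
	out/minikube-linux-arm64 kubectl -p multinode-265402 -- exec busybox-5bc68d56bd-5qwgb -- nslookup kubernetes.default.svc.cluster.local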

                                                
                                    
x
+
TestMultiNode/serial/AddNode (48.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-265402 -v 3 --alsologtostderr
E0108 22:56:55.943574 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-265402 -v 3 --alsologtostderr: (47.356937591s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.10s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-265402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (11.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp testdata/cp-test.txt multinode-265402:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1972308301/001/cp-test_multinode-265402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402:/home/docker/cp-test.txt multinode-265402-m02:/home/docker/cp-test_multinode-265402_multinode-265402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test_multinode-265402_multinode-265402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402:/home/docker/cp-test.txt multinode-265402-m03:/home/docker/cp-test_multinode-265402_multinode-265402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test_multinode-265402_multinode-265402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp testdata/cp-test.txt multinode-265402-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1972308301/001/cp-test_multinode-265402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m02:/home/docker/cp-test.txt multinode-265402:/home/docker/cp-test_multinode-265402-m02_multinode-265402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test_multinode-265402-m02_multinode-265402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m02:/home/docker/cp-test.txt multinode-265402-m03:/home/docker/cp-test_multinode-265402-m02_multinode-265402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test_multinode-265402-m02_multinode-265402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp testdata/cp-test.txt multinode-265402-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1972308301/001/cp-test_multinode-265402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m03:/home/docker/cp-test.txt multinode-265402:/home/docker/cp-test_multinode-265402-m03_multinode-265402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402 "sudo cat /home/docker/cp-test_multinode-265402-m03_multinode-265402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m03:/home/docker/cp-test.txt multinode-265402-m02:/home/docker/cp-test_multinode-265402-m03_multinode-265402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m02 "sudo cat /home/docker/cp-test_multinode-265402-m03_multinode-265402-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.52s)
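The copy matrix above exercises every direction of minikube cp (host to node, node back to host, and node to node), each verified with ssh -n <node> "sudo cat ...". A condensed sketch of the pattern, reusing this run's paths except for the illustrative cp-test_copy.txt target name:

	out/minikube-linux-arm64 -p multinode-265402 cp testdata/cp-test.txt multinode-265402-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-265402 cp multinode-265402-m02:/home/docker/cp-test.txt multinode-265402-m03:/home/docker/cp-test_copy.txt
	out/minikube-linux-arm64 -p multinode-265402 ssh -n multinode-265402-m03 "sudo cat /home/docker/cp-test_copy.txt"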

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-265402 node stop m03: (1.239534424s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-265402 status: exit status 7 (570.025161ms)

                                                
                                                
-- stdout --
	multinode-265402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-265402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-265402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr: exit status 7 (574.978793ms)

                                                
                                                
-- stdout --
	multinode-265402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-265402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-265402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 22:57:48.862087 1225710 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:57:48.862308 1225710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:57:48.862321 1225710 out.go:309] Setting ErrFile to fd 2...
	I0108 22:57:48.862328 1225710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:57:48.862622 1225710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 22:57:48.862856 1225710 out.go:303] Setting JSON to false
	I0108 22:57:48.862955 1225710 mustload.go:65] Loading cluster: multinode-265402
	I0108 22:57:48.863046 1225710 notify.go:220] Checking for updates...
	I0108 22:57:48.863446 1225710 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 22:57:48.863465 1225710 status.go:255] checking status of multinode-265402 ...
	I0108 22:57:48.864433 1225710 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 22:57:48.886333 1225710 status.go:330] multinode-265402 host status = "Running" (err=<nil>)
	I0108 22:57:48.886363 1225710 host.go:66] Checking if "multinode-265402" exists ...
	I0108 22:57:48.886671 1225710 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402
	I0108 22:57:48.906524 1225710 host.go:66] Checking if "multinode-265402" exists ...
	I0108 22:57:48.906880 1225710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:57:48.906930 1225710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402
	I0108 22:57:48.937792 1225710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34108 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402/id_rsa Username:docker}
	I0108 22:57:49.031727 1225710 ssh_runner.go:195] Run: systemctl --version
	I0108 22:57:49.037509 1225710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:57:49.051162 1225710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 22:57:49.129796 1225710 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-08 22:57:49.119727728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 22:57:49.130394 1225710 kubeconfig.go:92] found "multinode-265402" server: "https://192.168.58.2:8443"
	I0108 22:57:49.130418 1225710 api_server.go:166] Checking apiserver status ...
	I0108 22:57:49.130464 1225710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:57:49.143689 1225710 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I0108 22:57:49.155846 1225710 api_server.go:182] apiserver freezer: "11:freezer:/docker/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/crio/crio-e6725dd95b9ce03f87a10787660fc07d2cd2c9b66520e84f07833c80d0c9d732"
	I0108 22:57:49.155914 1225710 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/04ff89274b48c5c39b67c554385dd41ad390610d63d121a92fa2a123c746e9e7/crio/crio-e6725dd95b9ce03f87a10787660fc07d2cd2c9b66520e84f07833c80d0c9d732/freezer.state
	I0108 22:57:49.167183 1225710 api_server.go:204] freezer state: "THAWED"
	I0108 22:57:49.167213 1225710 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0108 22:57:49.176859 1225710 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0108 22:57:49.176896 1225710 status.go:421] multinode-265402 apiserver status = Running (err=<nil>)
	I0108 22:57:49.176909 1225710 status.go:257] multinode-265402 status: &{Name:multinode-265402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 22:57:49.176927 1225710 status.go:255] checking status of multinode-265402-m02 ...
	I0108 22:57:49.177275 1225710 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Status}}
	I0108 22:57:49.195362 1225710 status.go:330] multinode-265402-m02 host status = "Running" (err=<nil>)
	I0108 22:57:49.195387 1225710 host.go:66] Checking if "multinode-265402-m02" exists ...
	I0108 22:57:49.195685 1225710 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265402-m02
	I0108 22:57:49.214275 1225710 host.go:66] Checking if "multinode-265402-m02" exists ...
	I0108 22:57:49.214596 1225710 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 22:57:49.214650 1225710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265402-m02
	I0108 22:57:49.232810 1225710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34113 SSHKeyPath:/home/jenkins/minikube-integration/17866-1146913/.minikube/machines/multinode-265402-m02/id_rsa Username:docker}
	I0108 22:57:49.327350 1225710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:57:49.341015 1225710 status.go:257] multinode-265402-m02 status: &{Name:multinode-265402-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 22:57:49.341048 1225710 status.go:255] checking status of multinode-265402-m03 ...
	I0108 22:57:49.341367 1225710 cli_runner.go:164] Run: docker container inspect multinode-265402-m03 --format={{.State.Status}}
	I0108 22:57:49.362895 1225710 status.go:330] multinode-265402-m03 host status = "Stopped" (err=<nil>)
	I0108 22:57:49.362918 1225710 status.go:343] host is not running, skipping remaining checks
	I0108 22:57:49.362925 1225710 status.go:257] multinode-265402-m03 status: &{Name:multinode-265402-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
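Note on the exit codes above: minikube status deliberately returns a non-zero exit (7 in this run) whenever any node's host is stopped, so the two Non-zero exit entries are the expected outcome rather than a failure. The check is essentially:

	out/minikube-linux-arm64 -p multinode-265402 node stop m03
	out/minikube-linux-arm64 -p multinode-265402 status   # exit status 7 expected while m03 is stopped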

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (13.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-265402 node start m03 --alsologtostderr: (12.707135769s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (124.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-265402
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-265402
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-265402: (24.950515533s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-265402 --wait=true -v=8 --alsologtostderr
E0108 22:58:40.945849 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 22:59:51.766841 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-265402 --wait=true -v=8 --alsologtostderr: (1m39.372544209s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-265402
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.50s)
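The restart check above is a plain stop/start round trip: stop the whole profile, start it again with --wait=true, and confirm node list still reports the same nodes. Stripped of the test harness, the sequence is:

	out/minikube-linux-arm64 node list -p multinode-265402
	out/minikube-linux-arm64 stop -p multinode-265402
	out/minikube-linux-arm64 start -p multinode-265402 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-arm64 node list -p multinode-265402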

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-265402 node delete m03: (4.47503176s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-265402 stop: (23.827510877s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-265402 status: exit status 7 (114.296113ms)

                                                
                                                
-- stdout --
	multinode-265402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-265402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr: exit status 7 (111.884002ms)

                                                
                                                
-- stdout --
	multinode-265402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-265402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:00:36.691821 1233921 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:00:36.692022 1233921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:00:36.692033 1233921 out.go:309] Setting ErrFile to fd 2...
	I0108 23:00:36.692039 1233921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:00:36.692335 1233921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 23:00:36.692563 1233921 out.go:303] Setting JSON to false
	I0108 23:00:36.692661 1233921 mustload.go:65] Loading cluster: multinode-265402
	I0108 23:00:36.692748 1233921 notify.go:220] Checking for updates...
	I0108 23:00:36.693155 1233921 config.go:182] Loaded profile config "multinode-265402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 23:00:36.693173 1233921 status.go:255] checking status of multinode-265402 ...
	I0108 23:00:36.694041 1233921 cli_runner.go:164] Run: docker container inspect multinode-265402 --format={{.State.Status}}
	I0108 23:00:36.713469 1233921 status.go:330] multinode-265402 host status = "Stopped" (err=<nil>)
	I0108 23:00:36.713489 1233921 status.go:343] host is not running, skipping remaining checks
	I0108 23:00:36.713497 1233921 status.go:257] multinode-265402 status: &{Name:multinode-265402 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:00:36.713531 1233921 status.go:255] checking status of multinode-265402-m02 ...
	I0108 23:00:36.713829 1233921 cli_runner.go:164] Run: docker container inspect multinode-265402-m02 --format={{.State.Status}}
	I0108 23:00:36.733913 1233921 status.go:330] multinode-265402-m02 host status = "Stopped" (err=<nil>)
	I0108 23:00:36.733935 1233921 status.go:343] host is not running, skipping remaining checks
	I0108 23:00:36.733941 1233921 status.go:257] multinode-265402-m02 status: &{Name:multinode-265402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (80.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-265402 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0108 23:01:55.943428 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-265402 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.934421555s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-265402 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.73s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-265402
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-265402-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-265402-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.994047ms)

                                                
                                                
-- stdout --
	* [multinode-265402-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-265402-m02' is duplicated with machine name 'multinode-265402-m02' in profile 'multinode-265402'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-265402-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-265402-m03 --driver=docker  --container-runtime=crio: (32.725990745s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-265402
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-265402: exit status 80 (353.379308ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-265402
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-265402-m03 already exists in multinode-265402-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-265402-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-265402-m03: (2.036004792s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.28s)
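Both failures above are deliberate. Exit status 14 (MK_USAGE) fires because multinode-265402-m02 already exists as a machine name inside the multinode-265402 profile, and exit status 80 (GUEST_NODE_ADD) fires because node add would name the new node multinode-265402-m03, which by then exists as a standalone profile. In short, profile names and per-profile machine names must not collide:

	out/minikube-linux-arm64 start -p multinode-265402-m02 --driver=docker --container-runtime=crio   # rejected: duplicate name
	out/minikube-linux-arm64 node add -p multinode-265402                                             # rejected: m03 already exists as a profile
	out/minikube-linux-arm64 delete -p multinode-265402-m03                                           # remove the conflicting profile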

                                                
                                    
x
+
TestPreload (178.44s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 23:03:18.987524 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 23:03:40.945481 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.670081612s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-712932 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-712932 image pull gcr.io/k8s-minikube/busybox: (2.166558806s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-712932
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-712932: (5.847287289s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0108 23:04:51.766739 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m23.122350723s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-712932 image list
helpers_test.go:175: Cleaning up "test-preload-712932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-712932
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-712932: (2.359278736s)
--- PASS: TestPreload (178.44s)
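The preload scenario above is: create a v1.24.4 cluster with --preload=false, pull an extra image into it, stop, restart against the current preloaded images, and use image list to confirm the manually pulled image survived the restart. Reduced to commands (the memory size and busybox image are simply what this test picks):

	out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --wait=true --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload-712932 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload-712932
	out/minikube-linux-arm64 start -p test-preload-712932 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p test-preload-712932 image list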

                                                
                                    
x
+
TestScheduledStopUnix (111.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-179494 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-179494 --memory=2048 --driver=docker  --container-runtime=crio: (35.022584156s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-179494 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-179494 -n scheduled-stop-179494
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-179494 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-179494 --cancel-scheduled
E0108 23:06:14.808050 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-179494 -n scheduled-stop-179494
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-179494
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-179494 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0108 23:06:55.943463 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-179494
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-179494: exit status 7 (92.132137ms)

                                                
                                                
-- stdout --
	scheduled-stop-179494
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-179494 -n scheduled-stop-179494
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-179494 -n scheduled-stop-179494: exit status 7 (86.925371ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-179494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-179494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-179494: (4.436416715s)
--- PASS: TestScheduledStopUnix (111.31s)
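The scheduled-stop sequence above schedules a stop, cancels it, schedules another, and then watches the host go down; status --format={{.TimeToStop}} exposes the pending timer, and the final exit status 7 simply reflects a stopped host. The core commands, assuming an already-running profile:

	out/minikube-linux-arm64 stop -p scheduled-stop-179494 --schedule 5m
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-179494
	out/minikube-linux-arm64 stop -p scheduled-stop-179494 --cancel-scheduled
	out/minikube-linux-arm64 stop -p scheduled-stop-179494 --schedule 15s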

                                                
                                    
x
+
TestInsufficientStorage (13.75s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-742395 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-742395 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.136070232s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a8b8b7c-9b62-43f0-9b17-db3b8786ba0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-742395] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"baf076f4-006b-459f-8646-3cef6da8c84a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"9dd3d55d-78a0-4bf0-8f87-45f7fa4995ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27d6b994-4887-4708-bd00-ef5ce37cdfbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig"}}
	{"specversion":"1.0","id":"98921197-4ff3-47e7-904d-620f9beb20e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube"}}
	{"specversion":"1.0","id":"5e27defa-3040-4319-a6b8-bdac2e6488b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f4099b78-d30b-4291-ab2b-67cff70a809e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21752e93-4498-434a-8552-144008151248","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"917fcd4a-9a38-450b-b9f6-7140488bbeb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5c7af882-7deb-4cb3-98b8-cd2fda824ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1848d4d2-6b1d-4c15-91c7-a4b9ffb65439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"925c00e1-f8fa-481e-9b38-a747977ee816","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-742395 in cluster insufficient-storage-742395","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2247476-af64-4e61-9905-b58948498e0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703790982-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0787d0fc-a7d2-4e67-a7f1-9e9acdadd7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a673cea-b4e1-431e-b357-0a6078c685f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-742395 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-742395 --output=json --layout=cluster: exit status 7 (342.338913ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-742395","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-742395","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:07:40.602008 1250724 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-742395" does not appear in /home/jenkins/minikube-integration/17866-1146913/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-742395 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-742395 --output=json --layout=cluster: exit status 7 (323.915862ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-742395","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-742395","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 23:07:40.926541 1250778 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-742395" does not appear in /home/jenkins/minikube-integration/17866-1146913/kubeconfig
	E0108 23:07:40.938730 1250778 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/insufficient-storage-742395/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-742395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-742395
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-742395: (1.941963906s)
--- PASS: TestInsufficientStorage (13.75s)
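This test simulates a nearly full disk through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the JSON events, then expects start to abort with exit code 26 (RSRC_DOCKER_STORAGE) and status --layout=cluster to report StatusCode 507 (InsufficientStorage). A rough reproduction under the same assumptions:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-742395 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 status -p insufficient-storage-742395 --output=json --layout=cluster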

                                                
                                    
x
+
TestKubernetesUpgrade (421.05s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0108 23:09:51.767233 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.03867752s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-898575
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-898575: (2.807115466s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-898575 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-898575 status --format={{.Host}}: exit status 7 (97.273856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.035520512s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-898575 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (131.296657ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-898575] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-898575
	    minikube start -p kubernetes-upgrade-898575 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8985752 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-898575 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.091839785s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-898575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-898575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-898575: (2.641990287s)
--- PASS: TestKubernetesUpgrade (421.05s)
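The upgrade test walks old version, stop, new version, attempted downgrade, restart: start on v1.16.0, stop, start the same profile on v1.29.0-rc.2, confirm that requesting v1.16.0 again is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), then restart on the new version. Stripped of the harness:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-898575
	out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p kubernetes-upgrade-898575 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # fails: downgrade unsupported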

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (102.342159ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-163830] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
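As the MK_USAGE message states, --no-kubernetes cannot be combined with an explicit --kubernetes-version; the suggested remedy, when a version is pinned in the global config rather than on the command line, is to unset it before starting without Kubernetes:

	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --driver=docker --container-runtime=crio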

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (43.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163830 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163830 --driver=docker  --container-runtime=crio: (42.75601611s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-163830 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --driver=docker  --container-runtime=crio: (6.692962581s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-163830 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-163830 status -o json: exit status 2 (553.285601ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-163830","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-163830
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-163830: (2.359549346s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --driver=docker  --container-runtime=crio
E0108 23:08:40.946099 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163830 --no-kubernetes --driver=docker  --container-runtime=crio: (10.984538486s)
--- PASS: TestNoKubernetes/serial/Start (10.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-163830 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-163830 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.054431ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
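The pass here hinges on the ssh command failing: systemctl is-active exits non-zero (status 3 in this run) because no kubelet unit is running in a --no-kubernetes profile. The equivalent manual check:

	out/minikube-linux-arm64 ssh -p NoKubernetes-163830 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit expected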

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-163830
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-163830: (1.324170662s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-163830 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-163830 --driver=docker  --container-runtime=crio: (7.894878683s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-163830 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-163830 "sudo systemctl is-active --quiet service kubelet": exit status 1 (529.040594ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-152791
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                    
x
+
TestPause/serial/Start (52.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-006199 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0108 23:13:40.946054 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-006199 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.689911726s)
--- PASS: TestPause/serial/Start (52.69s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (30.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-006199 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-006199 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.190370948s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.25s)

                                                
                                    
x
+
TestPause/serial/Pause (1.27s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-006199 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-006199 --alsologtostderr -v=5: (1.266056772s)
--- PASS: TestPause/serial/Pause (1.27s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-006199 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-006199 --output=json --layout=cluster: exit status 2 (560.087861ms)

                                                
                                                
-- stdout --
	{"Name":"pause-006199","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-006199","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.56s)

                                                
                                    
x
+
TestPause/serial/Unpause (1.35s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-006199 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-006199 --alsologtostderr -v=5: (1.352726463s)
--- PASS: TestPause/serial/Unpause (1.35s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.56s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-006199 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-006199 --alsologtostderr -v=5: (1.562814266s)
--- PASS: TestPause/serial/PauseAgain (1.56s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.09s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-006199 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-006199 --alsologtostderr -v=5: (3.088090773s)
--- PASS: TestPause/serial/DeletePaused (3.09s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-006199
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-006199: exit status 1 (30.118398ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-006199: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-624506 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-624506 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (225.684161ms)

                                                
                                                
-- stdout --
	* [false-624506] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 23:15:38.039870 1288839 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:15:38.040111 1288839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:15:38.040142 1288839 out.go:309] Setting ErrFile to fd 2...
	I0108 23:15:38.040166 1288839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:15:38.040498 1288839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-1146913/.minikube/bin
	I0108 23:15:38.041034 1288839 out.go:303] Setting JSON to false
	I0108 23:15:38.042132 1288839 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":21478,"bootTime":1704734260,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0108 23:15:38.042251 1288839 start.go:138] virtualization:  
	I0108 23:15:38.045574 1288839 out.go:177] * [false-624506] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0108 23:15:38.048450 1288839 notify.go:220] Checking for updates...
	I0108 23:15:38.051439 1288839 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 23:15:38.054029 1288839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:15:38.056458 1288839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-1146913/kubeconfig
	I0108 23:15:38.058747 1288839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-1146913/.minikube
	I0108 23:15:38.060696 1288839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0108 23:15:38.062707 1288839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:15:38.065614 1288839 config.go:182] Loaded profile config "kubernetes-upgrade-898575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 23:15:38.065830 1288839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:15:38.095322 1288839 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0108 23:15:38.095471 1288839 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0108 23:15:38.179227 1288839 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-08 23:15:38.168787208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0108 23:15:38.179336 1288839 docker.go:295] overlay module found
	I0108 23:15:38.181624 1288839 out.go:177] * Using the docker driver based on user configuration
	I0108 23:15:38.183893 1288839 start.go:298] selected driver: docker
	I0108 23:15:38.183919 1288839 start.go:902] validating driver "docker" against <nil>
	I0108 23:15:38.183934 1288839 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:15:38.186657 1288839 out.go:177] 
	W0108 23:15:38.189181 1288839 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 23:15:38.191314 1288839 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-624506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-898575
contexts:
- context:
    cluster: kubernetes-upgrade-898575
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-898575
  name: kubernetes-upgrade-898575
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-898575
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.crt
    client-key: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-624506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-624506"

                                                
                                                
----------------------- debugLogs end: false-624506 [took: 4.019906035s] --------------------------------
helpers_test.go:175: Cleaning up "false-624506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-624506
--- PASS: TestNetworkPlugins/group/false (4.52s)
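Note: the exit status 14 recorded above is the expected outcome for this group. minikube rejects "--cni=false" together with the crio runtime because CRI-O provides no built-in pod networking and requires a CNI plugin, which is exactly the MK_USAGE error in the stderr block. A minimal sketch of a start invocation that passes the same validation, assuming a hypothetical profile name not used in this run, would name a CNI explicitly:

	out/minikube-linux-arm64 start -p crio-cni-example --memory=2048 --cni=bridge --driver=docker --container-runtime=crio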

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (128.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-755174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0108 23:18:40.945847 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-755174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m8.367315728s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-755174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41c8aeb9-6a0b-4bfe-b15e-2b0922d4dd92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [41c8aeb9-6a0b-4bfe-b15e-2b0922d4dd92] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003160982s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-755174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-755174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-755174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015768238s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-755174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-755174 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-755174 --alsologtostderr -v=3: (12.236770747s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-755174 -n old-k8s-version-755174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-755174 -n old-k8s-version-755174: exit status 7 (97.151277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-755174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (441.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-755174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0108 23:19:51.766680 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 23:19:58.989092 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-755174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m21.195743052s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-755174 -n old-k8s-version-755174
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (441.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (70.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-332390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-332390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m10.410152227s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-332390 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d4cfa21-2793-472c-8d8f-b6165d385672] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d4cfa21-2793-472c-8d8f-b6165d385672] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004397566s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-332390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-332390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-332390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033176584s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-332390 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-332390 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-332390 --alsologtostderr -v=3: (12.018556211s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-332390 -n no-preload-332390
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-332390 -n no-preload-332390: exit status 7 (92.009159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-332390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (366.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-332390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 23:21:55.942932 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 23:22:54.808289 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 23:23:40.945466 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 23:24:51.767300 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 23:26:55.942812 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-332390 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m5.835590371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-332390 -n no-preload-332390
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (366.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kzjn6" [509f787f-a7d8-4249-8841-b2b6ec9c5ba9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004206254s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kzjn6" [509f787f-a7d8-4249-8841-b2b6ec9c5ba9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004260433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-755174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-755174 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-755174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-755174 --alsologtostderr -v=1: (1.009648295s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-755174 -n old-k8s-version-755174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-755174 -n old-k8s-version-755174: exit status 2 (404.408468ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-755174 -n old-k8s-version-755174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-755174 -n old-k8s-version-755174: exit status 2 (385.877954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-755174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-755174 -n old-k8s-version-755174
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-755174 -n old-k8s-version-755174
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (82.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-204048 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-204048 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m22.245831492s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2n5x6" [ea104f8f-3df6-49a1-a735-6b37e306db07] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2n5x6" [ea104f8f-3df6-49a1-a735-6b37e306db07] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004136197s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2n5x6" [ea104f8f-3df6-49a1-a735-6b37e306db07] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004966213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-332390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-332390 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-332390 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-332390 --alsologtostderr -v=1: (1.31929139s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-332390 -n no-preload-332390
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-332390 -n no-preload-332390: exit status 2 (520.898775ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-332390 -n no-preload-332390
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-332390 -n no-preload-332390: exit status 2 (524.6992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-332390 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-332390 -n no-preload-332390
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-332390 -n no-preload-332390
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-678820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:28:23.997371 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 23:28:40.945667 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-678820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m15.390333892s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-204048 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ea46b61-cf05-41d5-83c8-cad1a1f4a86e] Pending
helpers_test.go:344: "busybox" [0ea46b61-cf05-41d5-83c8-cad1a1f4a86e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ea46b61-cf05-41d5-83c8-cad1a1f4a86e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00334665s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-204048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-204048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-204048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095299863s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-204048 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-204048 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-204048 --alsologtostderr -v=3: (12.015777513s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204048 -n embed-certs-204048
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204048 -n embed-certs-204048: exit status 7 (95.266968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-204048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (632.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-204048 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:29:21.870333 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:21.875563 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:21.885798 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:21.906059 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:21.946326 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:22.026770 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:22.187139 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:22.507706 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:29:23.148185 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-204048 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m32.248970329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-204048 -n embed-certs-204048
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (632.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-678820 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [27c8ff4e-81a8-4f67-94a1-c24eb3de1ed8] Pending
E0108 23:29:24.429015 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
helpers_test.go:344: "busybox" [27c8ff4e-81a8-4f67-94a1-c24eb3de1ed8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 23:29:26.989250 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
helpers_test.go:344: "busybox" [27c8ff4e-81a8-4f67-94a1-c24eb3de1ed8] Running
E0108 23:29:32.109867 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004273525s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-678820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-678820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-678820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.514791849s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-678820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-678820 --alsologtostderr -v=3
E0108 23:29:42.350707 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-678820 --alsologtostderr -v=3: (12.330201423s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820: exit status 7 (98.264847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-678820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-678820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 23:29:51.767687 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 23:30:02.830848 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:30:43.791072 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:31:18.126194 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.131578 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.141903 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.162190 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.202490 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.282828 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.443177 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:18.764111 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:19.404953 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:20.685107 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:23.245343 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:28.366054 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:38.606618 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:31:55.943376 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 23:31:59.087068 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:32:05.711483 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:32:40.047463 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:33:40.945719 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 23:34:01.967662 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:34:21.869768 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:34:49.551621 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:34:51.767405 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
E0108 23:36:18.125506 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:36:38.989597 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 23:36:45.807906 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:36:55.942597 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
E0108 23:38:40.945104 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 23:39:21.870171 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:39:34.809171 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-678820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m0.455292886s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcd74" [a76dae0e-912d-41b4-9e7f-04f36dd3091d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003872334s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8dh4k" [a77d1dff-6cf3-4cc6-8d14-7e1e46eb6b95] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004393535s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcd74" [a76dae0e-912d-41b4-9e7f-04f36dd3091d] Running
E0108 23:39:51.767243 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003780415s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-204048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8dh4k" [a77d1dff-6cf3-4cc6-8d14-7e1e46eb6b95] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004357564s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-678820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-204048 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-204048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204048 -n embed-certs-204048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204048 -n embed-certs-204048: exit status 2 (368.659025ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-204048 -n embed-certs-204048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-204048 -n embed-certs-204048: exit status 2 (378.599467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-204048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-204048 -n embed-certs-204048
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-204048 -n embed-certs-204048
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-678820 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-678820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-678820 --alsologtostderr -v=1: (1.001190774s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820: exit status 2 (477.259629ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820: exit status 2 (486.1869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-678820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-678820 --alsologtostderr -v=1: (1.136022402s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-678820 -n default-k8s-diff-port-678820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (53.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-972622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-972622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (53.600314681s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.60s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (84.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.519974774s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-972622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-972622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.13424686s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-972622 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-972622 --alsologtostderr -v=3: (1.402747783s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-972622 -n newest-cni-972622
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-972622 -n newest-cni-972622: exit status 7 (168.396214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-972622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-972622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 23:41:18.125783 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-972622 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (31.963187598s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-972622 -n newest-cni-972622
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-972622 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-972622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-972622 -n newest-cni-972622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-972622 -n newest-cni-972622: exit status 2 (383.169907ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-972622 -n newest-cni-972622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-972622 -n newest-cni-972622: exit status 2 (386.953183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-972622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-972622 -n newest-cni-972622
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-972622 -n newest-cni-972622
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.62s)
E0108 23:47:41.168132 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/no-preload-332390/client.crt: no such file or directory
E0108 23:47:56.990605 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hrg82" [48d2bed5-f730-4eed-add4-f24ebdb79925] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hrg82" [48d2bed5-f730-4eed-add4-f24ebdb79925] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003663428s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.105766028s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.11s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.999542087s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hwh55" [36eaddc3-dbac-4471-a6c8-cb4a0373e051] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004768297s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9pvbd" [aa6c5c9a-de43-475f-990e-7e54f7876d7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9pvbd" [aa6c5c9a-de43-475f-990e-7e54f7876d7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004114409s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vvsfn" [8c688470-7de3-41f0-9b27-1830015a88a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006819291s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vsshn" [50857090-070a-47a6-9ca3-d893e3ec8be3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 23:43:40.945497 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vsshn" [50857090-070a-47a6-9ca3-d893e3ec8be3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004702241s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (77.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m17.113720022s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.11s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (92.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0108 23:44:21.869864 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:44:24.250283 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.255492 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.265738 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.285997 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.326126 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.406363 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.566681 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:24.887455 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:25.528374 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:26.809102 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:29.369602 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:34.490524 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:44.731305 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
E0108 23:44:51.767293 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/ingress-addon-legacy-332576/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.406899625s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.41s)
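
Unlike the runs above that install a specific plugin, --enable-default-cni brings the cluster up on minikube's built-in default CNI configuration. One quick way to see which CNI config actually landed on the node is to list /etc/cni/net.d over minikube ssh; a hedged sketch under an assumed profile name (the report uses enable-default-cni-624506):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical profile name, purely for illustration.
	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "default-cni-demo", "ls /etc/cni/net.d").CombinedOutput()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
	}
	fmt.Printf("CNI config files on the node:\n%s", out)
}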

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)
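
The KubeletFlags step ssh-es into the node and pgreps the kubelet command line, which is where settings like the CRI socket show up. A sketch that does the same and checks for the crio socket; the profile name and the expected substring are assumptions, not values taken from the harness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: out/minikube-linux-arm64 ssh -p <profile> "pgrep -a kubelet"
	out, err := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "custom-flannel-demo", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	cmdline := strings.TrimSpace(string(out))
	fmt.Println("kubelet command line:", cmdline)
	// On a crio-backed cluster the CRI endpoint is expected to point at crio.sock.
	if strings.Contains(cmdline, "crio.sock") {
		fmt.Println("kubelet is wired to CRI-O")
	}
}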

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kls66" [5cc056f3-6b53-44b1-9dcf-1c06b9423889] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 23:45:03.997838 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/addons-260832/client.crt: no such file or directory
E0108 23:45:05.211830 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kls66" [5cc056f3-6b53-44b1-9dcf-1c06b9423889] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007285843s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)
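
The NetCatPod step force-replaces a small netcat/dnsutils deployment and then waits for a pod labelled app=netcat to reach Running, which is the Pending-to-Running transition visible in the two helpers_test.go lines above. A rough equivalent of that wait using kubectl's jsonpath output (context name, label and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	kubeCtx := "custom-flannel-demo" // hypothetical context name
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeCtx,
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && phases[0] == "Running" {
			fmt.Println("netcat pod is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat to reach Running")
}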

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (65.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0108 23:45:44.912248 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/old-k8s-version-755174/client.crt: no such file or directory
E0108 23:45:46.172322 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/default-k8s-diff-port-678820/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.633161916s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s7w4k" [5948b01f-cb7b-4513-a920-ad88fadb2915] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s7w4k" [5948b01f-cb7b-4513-a920-ad88fadb2915] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004121786s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (87.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0108 23:46:35.069733 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.075053 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.085296 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.105607 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.145865 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.226164 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.386520 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:35.707040 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:36.348027 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:37.628524 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:40.188781 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-624506 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.632297089s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2n7xv" [c5debd98-f65f-4533-bebb-b16ad7402831] Running
E0108 23:46:45.309669 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.009467481s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8jtld" [5e76835c-dd66-4182-a657-04bd0e8d1ec1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8jtld" [5e76835c-dd66-4182-a657-04bd0e8d1ec1] Running
E0108 23:46:55.550165 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/auto-624506/client.crt: no such file or directory
E0108 23:46:55.942904 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/functional-037488/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004066766s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-624506 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-624506 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mztr2" [338c2d8e-7172-4f59-90aa-27920862cbfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 23:48:00.759931 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:00.765176 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:00.775476 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:00.795754 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:00.836113 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:00.916528 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:01.077142 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:01.397668 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:02.038036 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
E0108 23:48:03.319057 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-mztr2" [338c2d8e-7172-4f59-90aa-27920862cbfc] Running
E0108 23:48:05.880166 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004099829s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-624506 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0108 23:48:11.001239 1152251 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kindnet-624506/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-624506 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (32/316)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.63s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-760773 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-760773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-760773
--- SKIP: TestDownloadOnlyKic (0.63s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-300804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-300804
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-624506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-898575
contexts:
- context:
    cluster: kubernetes-upgrade-898575
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-898575
  name: kubernetes-upgrade-898575
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-898575
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.crt
    client-key: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.key
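
One detail worth noting in this dump: the only context present is kubernetes-upgrade-898575 and current-context is empty, which is why every kubenet-624506 command in this debugLogs block fails with "context was not found". A short sketch of reading those fields programmatically, assuming the k8s.io/client-go dependency is available and taking the path from $KUBECONFIG purely for illustration:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // illustrative; the report's file lives under the Jenkins workspace
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
}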

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-624506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-624506"

                                                
                                                
----------------------- debugLogs end: kubenet-624506 [took: 3.943382112s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-624506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-624506
--- SKIP: TestNetworkPlugins/group/kubenet (4.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-624506 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-624506" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17866-1146913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-898575
contexts:
- context:
    cluster: kubernetes-upgrade-898575
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-898575
  name: kubernetes-upgrade-898575
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-898575
  user:
    client-certificate: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.crt
    client-key: /home/jenkins/minikube-integration/17866-1146913/.minikube/profiles/kubernetes-upgrade-898575/client.key
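
Note: the kubeconfig dumped above defines only the kubernetes-upgrade-898575 cluster and leaves current-context empty, which is consistent with the repeated "context was not found for specified context: cilium-624506" errors earlier in this section. As a rough illustration only (not part of the minikube test suite), a small client-go snippet along these lines could check whether a given context exists in a kubeconfig; the file path and context name below are placeholders.

// contextcheck.go - a minimal sketch (assumed, not from the minikube repo) that
// loads a kubeconfig and reports whether a named context is defined and active.
package main

import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        // Placeholder path and context name; adjust for the environment under test.
        kubeconfig := os.Getenv("HOME") + "/.kube/config"
        wanted := "cilium-624506"

        cfg, err := clientcmd.LoadFromFile(kubeconfig)
        if err != nil {
                fmt.Fprintf(os.Stderr, "failed to load kubeconfig: %v\n", err)
                os.Exit(1)
        }

        if _, ok := cfg.Contexts[wanted]; !ok {
                // Matches the failure mode seen above: the context is simply not defined.
                fmt.Printf("context %q not found in %s\n", wanted, kubeconfig)
                return
        }
        if cfg.CurrentContext != wanted {
                fmt.Printf("context %q exists but is not current (current-context=%q)\n", wanted, cfg.CurrentContext)
                return
        }
        fmt.Printf("context %q is defined and active\n", wanted)
}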

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-624506

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-624506" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-624506"

                                                
                                                
----------------------- debugLogs end: cilium-624506 [took: 5.937489494s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-624506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-624506
--- SKIP: TestNetworkPlugins/group/cilium (6.25s)
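
Note: both network-plugin groups above (kubenet and cilium) are skipped intentionally at net_test.go:102 rather than failing. As an illustration only, and not the actual minikube implementation, a skip of this kind in a Go integration test typically looks like the sketch below; the function name, guard map, and message are assumptions.

// A minimal sketch of skipping an outdated network-plugin case in a Go test;
// the condition and identifiers here are illustrative, not copied from
// minikube's net_test.go.
package integration

import "testing"

func TestNetworkPluginsGroupCilium(t *testing.T) {
        // Hypothetical guard: cilium and kubenet are known to interfere with
        // other parallel tests, so bail out early instead of reporting a failure.
        outdatedPlugins := map[string]bool{"cilium": true, "kubenet": true}
        plugin := "cilium"
        if outdatedPlugins[plugin] {
                t.Skipf("Skipping %s: it interferes with other tests and is outdated", plugin)
        }
        // ... the real connectivity checks would run here ...
}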

                                                
                                    