Test Report: Docker_Linux_crio_arm64 17363

Commit: 9401f4c578044658a0ecc50e70738aa1fc99eff9 | 2023-10-05 | 31314

Failed tests (7/301)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
|    28 | TestAddons/parallel/Ingress                          |       170.46 |
|   158 | TestIngressAddonLegacy/serial/ValidateIngressAddons  |       183.45 |
|   208 | TestMultiNode/serial/PingHostFrom2Pods               |         4.86 |
|   229 | TestRunningBinaryUpgrade                             |        79.80 |
|   232 | TestMissingContainerUpgrade                          |       130.25 |
|   240 | TestPause/serial/SecondStartNoReconfiguration        |        80.80 |
|   258 | TestStoppedBinaryUpgrade/Upgrade                     |        83.65 |
|-------|------------------------------------------------------|--------------|
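To replay any one of these failures locally, the named test can be run against a freshly built binary. A minimal sketch, assuming a checkout of the minikube repository: -run is the standard Go test filter, and -minikube-start-args (an assumption based on the integration suite's conventions, not shown in this report) mirrors this job's docker driver + cri-o configuration.

    # Build the binary under test, then replay one failed test by name.
    make
    go test ./test/integration -v -timeout 30m \
        -run 'TestAddons/parallel/Ingress' \
        -args -minikube-start-args='--driver=docker --container-runtime=crio'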
TestAddons/parallel/Ingress (170.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-792068 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-792068 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-792068 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [21aec9d8-f3f5-4251-b194-fe456aebe040] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [21aec9d8-f3f5-4251-b194-fe456aebe040] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.01264379s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-792068 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.263421303s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
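Status 28 is curl's timeout exit code (CURLE_OPERATION_TIMEDOUT): the SSH session itself worked, but the connection to port 80 on the node timed out before anything answered. The same probe can be re-run by hand, assuming the addons-792068 profile is still up:

    # Identical to the test's probe; exit status 28 again would mean the
    # ingress controller is still not answering on the node's port 80.
    out/minikube-linux-arm64 -p addons-792068 ssh \
        "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"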
addons_test.go:284: (dbg) Run:  kubectl --context addons-792068 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:284: (dbg) Done: kubectl --context addons-792068 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.007197261s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:295: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.050270341s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:297: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:301: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
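The ingress-dns addon serves DNS directly on the node IP (192.168.49.2 in this run), so the failed lookup can be retried by hand; the dig form is an equivalent cross-check, not something the test itself runs:

    # Query the ingress-dns resolver on the node IP directly.
    nslookup hello-john.test 192.168.49.2
    dig +short hello-john.test @192.168.49.2
    # "connection timed out" from either means nothing is answering on the node's port 53.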
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-792068 addons disable ingress-dns --alsologtostderr -v=1: (1.09713285s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-792068 addons disable ingress --alsologtostderr -v=1: (7.767567433s)
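As an aside (not part of the test flow), the profile's remaining addon state after these teardown steps can be confirmed with the standard listing subcommand:

    # List every addon and its enabled/disabled state for this profile.
    out/minikube-linux-arm64 -p addons-792068 addons list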
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-792068
helpers_test.go:235: (dbg) docker inspect addons-792068:

-- stdout --
	[
	    {
	        "Id": "1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb",
	        "Created": "2023-10-05T21:15:38.802781058Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1454745,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:15:39.187480983Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb/hosts",
	        "LogPath": "/var/lib/docker/containers/1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb/1ccdfd021a2160674a328f14e69dd7ab7a1657091fb97834c57e3b2708569fcb-json.log",
	        "Name": "/addons-792068",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-792068:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-792068",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e05f5f5fb7211899f0391af8d0063e24d0d8d1106414147df746c75840fd1fb2-init/diff:/var/lib/docker/overlay2/d90b9e2f667f252141d832d5a382f20f93e3e59a1248437095891beeaafeffd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05f5f5fb7211899f0391af8d0063e24d0d8d1106414147df746c75840fd1fb2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05f5f5fb7211899f0391af8d0063e24d0d8d1106414147df746c75840fd1fb2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05f5f5fb7211899f0391af8d0063e24d0d8d1106414147df746c75840fd1fb2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-792068",
	                "Source": "/var/lib/docker/volumes/addons-792068/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-792068",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-792068",
	                "name.minikube.sigs.k8s.io": "addons-792068",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e53b6318416cddeb687a71f387ed18a2e0b02139189ee32376909fc7feb05a7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34077"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34074"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7e53b6318416",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-792068": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1ccdfd021a21",
	                        "addons-792068"
	                    ],
	                    "NetworkID": "59c542642c2b94385ba2730e64e9f9a2d6a8ebb770b2fc316c995fa60b16c2b7",
	                    "EndpointID": "5aaf166bcec918f650cbdd81f66117562a349ecb48edd577239df0a6e5001213",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
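The full inspect dump above is rarely needed whole; single fields can be extracted with Go templates via docker inspect -f. The two templates below are the same ones the harness itself uses elsewhere in this report's logs:

    # Container state only (cf. the --format={{.State.Status}} runs in the Last Start log).
    docker inspect -f '{{.State.Status}}' addons-792068
    # Container IPs on the cluster network (cf. the cli_runner line in the Last Start log).
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' addons-792068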
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-792068 -n addons-792068
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-792068 logs -n 25: (1.725536149s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-762455   | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |                     |
	|         | -p download-only-762455                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:15 UTC |
	| delete  | -p download-only-762455                                                                     | download-only-762455   | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:15 UTC |
	| delete  | -p download-only-762455                                                                     | download-only-762455   | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:15 UTC |
	| start   | --download-only -p                                                                          | download-docker-717480 | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |                     |
	|         | download-docker-717480                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-717480                                                                   | download-docker-717480 | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-658961   | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |                     |
	|         | binary-mirror-658961                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40319                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-658961                                                                     | binary-mirror-658961   | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:15 UTC |
	| addons  | enable dashboard -p                                                                         | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |                     |
	|         | addons-792068                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |                     |
	|         | addons-792068                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-792068 --wait=true                                                                | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC | 05 Oct 23 21:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-792068 ssh cat                                                                       | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | /opt/local-path-provisioner/pvc-85bf44a9-7629-4bdb-ac2c-0a5f3af53dd1_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-792068 addons disable                                                                | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-792068 ip                                                                            | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	| addons  | addons-792068 addons disable                                                                | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | addons-792068                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | -p addons-792068                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | addons-792068                                                                               |                        |         |         |                     |                     |
	| addons  | addons-792068 addons                                                                        | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC | 05 Oct 23 21:18 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-792068 ssh curl -s                                                                   | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:18 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-792068 addons                                                                        | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:19 UTC | 05 Oct 23 21:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-792068 addons                                                                        | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:19 UTC | 05 Oct 23 21:19 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-792068 ip                                                                            | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:21 UTC | 05 Oct 23 21:21 UTC |
	| addons  | addons-792068 addons disable                                                                | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:21 UTC | 05 Oct 23 21:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-792068 addons disable                                                                | addons-792068          | jenkins | v1.31.2 | 05 Oct 23 21:21 UTC | 05 Oct 23 21:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:15:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:15:14.697569 1454285 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:15:14.697776 1454285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:15:14.697792 1454285 out.go:309] Setting ErrFile to fd 2...
	I1005 21:15:14.697799 1454285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:15:14.698116 1454285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:15:14.698576 1454285 out.go:303] Setting JSON to false
	I1005 21:15:14.699561 1454285 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25062,"bootTime":1696515453,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:15:14.699633 1454285 start.go:138] virtualization:  
	I1005 21:15:14.702296 1454285 out.go:177] * [addons-792068] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:15:14.704982 1454285 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:15:14.707153 1454285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:15:14.705142 1454285 notify.go:220] Checking for updates...
	I1005 21:15:14.711085 1454285 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:15:14.712850 1454285 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:15:14.715145 1454285 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:15:14.716925 1454285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:15:14.718920 1454285 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:15:14.742835 1454285 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:15:14.742941 1454285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:15:14.832994 1454285 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:15:14.821921332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:15:14.833192 1454285 docker.go:294] overlay module found
	I1005 21:15:14.836356 1454285 out.go:177] * Using the docker driver based on user configuration
	I1005 21:15:14.838608 1454285 start.go:298] selected driver: docker
	I1005 21:15:14.838630 1454285 start.go:902] validating driver "docker" against <nil>
	I1005 21:15:14.838645 1454285 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:15:14.839283 1454285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:15:14.910905 1454285 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:15:14.900793728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:15:14.911069 1454285 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:15:14.911343 1454285 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:15:14.913824 1454285 out.go:177] * Using Docker driver with root privileges
	I1005 21:15:14.915765 1454285 cni.go:84] Creating CNI manager for ""
	I1005 21:15:14.915793 1454285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:15:14.915805 1454285 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:15:14.915823 1454285 start_flags.go:321] config:
	{Name:addons-792068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-792068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:15:14.918355 1454285 out.go:177] * Starting control plane node addons-792068 in cluster addons-792068
	I1005 21:15:14.920445 1454285 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:15:14.922736 1454285 out.go:177] * Pulling base image ...
	I1005 21:15:14.925868 1454285 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:15:14.925929 1454285 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:15:14.925944 1454285 cache.go:57] Caching tarball of preloaded images
	I1005 21:15:14.925955 1454285 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:15:14.926040 1454285 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:15:14.926051 1454285 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:15:14.926392 1454285 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/config.json ...
	I1005 21:15:14.926412 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/config.json: {Name:mk9479c1c694acc59edc5a50a4fb3d9273448549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:14.943111 1454285 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:15:14.943266 1454285 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:15:14.943290 1454285 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 21:15:14.943294 1454285 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 21:15:14.943302 1454285 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:15:14.943307 1454285 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1005 21:15:31.193596 1454285 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1005 21:15:31.193634 1454285 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:15:31.193665 1454285 start.go:365] acquiring machines lock for addons-792068: {Name:mk23bc6a6d3f591fd82ce51601966452a1e3265e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:15:31.193791 1454285 start.go:369] acquired machines lock for "addons-792068" in 106.051µs
	I1005 21:15:31.193841 1454285 start.go:93] Provisioning new machine with config: &{Name:addons-792068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-792068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:15:31.193936 1454285 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:15:31.196286 1454285 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1005 21:15:31.196528 1454285 start.go:159] libmachine.API.Create for "addons-792068" (driver="docker")
	I1005 21:15:31.196556 1454285 client.go:168] LocalClient.Create starting
	I1005 21:15:31.196686 1454285 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:15:31.647120 1454285 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:15:32.205505 1454285 cli_runner.go:164] Run: docker network inspect addons-792068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:15:32.222366 1454285 cli_runner.go:211] docker network inspect addons-792068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:15:32.222469 1454285 network_create.go:281] running [docker network inspect addons-792068] to gather additional debugging logs...
	I1005 21:15:32.222489 1454285 cli_runner.go:164] Run: docker network inspect addons-792068
	W1005 21:15:32.239867 1454285 cli_runner.go:211] docker network inspect addons-792068 returned with exit code 1
	I1005 21:15:32.239897 1454285 network_create.go:284] error running [docker network inspect addons-792068]: docker network inspect addons-792068: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-792068 not found
	I1005 21:15:32.239909 1454285 network_create.go:286] output of [docker network inspect addons-792068]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-792068 not found
	
	** /stderr **
	I1005 21:15:32.240019 1454285 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:15:32.267417 1454285 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001351280}
	I1005 21:15:32.267454 1454285 network_create.go:124] attempt to create docker network addons-792068 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 21:15:32.267511 1454285 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-792068 addons-792068
	I1005 21:15:32.339922 1454285 network_create.go:108] docker network addons-792068 192.168.49.0/24 created
	I1005 21:15:32.339953 1454285 kic.go:117] calculated static IP "192.168.49.2" for the "addons-792068" container
	I1005 21:15:32.340025 1454285 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:15:32.357864 1454285 cli_runner.go:164] Run: docker volume create addons-792068 --label name.minikube.sigs.k8s.io=addons-792068 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:15:32.376138 1454285 oci.go:103] Successfully created a docker volume addons-792068
	I1005 21:15:32.376226 1454285 cli_runner.go:164] Run: docker run --rm --name addons-792068-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792068 --entrypoint /usr/bin/test -v addons-792068:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:15:34.504573 1454285 cli_runner.go:217] Completed: docker run --rm --name addons-792068-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792068 --entrypoint /usr/bin/test -v addons-792068:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (2.128293452s)
	I1005 21:15:34.504609 1454285 oci.go:107] Successfully prepared a docker volume addons-792068
	I1005 21:15:34.504630 1454285 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:15:34.504656 1454285 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:15:34.504760 1454285 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-792068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:15:38.716934 1454285 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-792068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.212121916s)
	I1005 21:15:38.716968 1454285 kic.go:199] duration metric: took 4.212308 seconds to extract preloaded images to volume
	W1005 21:15:38.717114 1454285 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:15:38.717225 1454285 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:15:38.786416 1454285 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-792068 --name addons-792068 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-792068 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-792068 --network addons-792068 --ip 192.168.49.2 --volume addons-792068:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:15:39.196081 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Running}}
	I1005 21:15:39.222058 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:15:39.250501 1454285 cli_runner.go:164] Run: docker exec addons-792068 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:15:39.335394 1454285 oci.go:144] the created container "addons-792068" has a running status.
	I1005 21:15:39.335421 1454285 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa...
	I1005 21:15:39.820875 1454285 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:15:39.854133 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:15:39.881089 1454285 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:15:39.881108 1454285 kic_runner.go:114] Args: [docker exec --privileged addons-792068 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:15:39.993150 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:15:40.051329 1454285 machine.go:88] provisioning docker machine ...
	I1005 21:15:40.051365 1454285 ubuntu.go:169] provisioning hostname "addons-792068"
	I1005 21:15:40.051448 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:40.091404 1454285 main.go:141] libmachine: Using SSH client type: native
	I1005 21:15:40.091854 1454285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34077 <nil> <nil>}
	I1005 21:15:40.091875 1454285 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-792068 && echo "addons-792068" | sudo tee /etc/hostname
	I1005 21:15:40.301151 1454285 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-792068
	
	I1005 21:15:40.301294 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:40.347968 1454285 main.go:141] libmachine: Using SSH client type: native
	I1005 21:15:40.348369 1454285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34077 <nil> <nil>}
	I1005 21:15:40.348395 1454285 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-792068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-792068/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-792068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:15:40.504353 1454285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
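The SSH command above patches /etc/hosts idempotently: it rewrites an existing 127.0.1.1 entry if one is present, and appends a new one otherwise. A standalone sketch of the same edit, with HOSTNAME as a placeholder:

    HOSTNAME=addons-792068   # placeholder; substitute the node name
    if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        # rewrite the existing 127.0.1.1 line in place
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
      else
        # no 127.0.1.1 line yet: append one
        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi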
	I1005 21:15:40.504432 1454285 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:15:40.504469 1454285 ubuntu.go:177] setting up certificates
	I1005 21:15:40.504506 1454285 provision.go:83] configureAuth start
	I1005 21:15:40.504614 1454285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792068
	I1005 21:15:40.524785 1454285 provision.go:138] copyHostCerts
	I1005 21:15:40.524864 1454285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:15:40.524977 1454285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:15:40.525038 1454285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:15:40.525081 1454285 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.addons-792068 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-792068]
	I1005 21:15:40.706222 1454285 provision.go:172] copyRemoteCerts
	I1005 21:15:40.706292 1454285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:15:40.706335 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:40.726375 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:15:40.825615 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:15:40.857094 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1005 21:15:40.886968 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 21:15:40.918290 1454285 provision.go:86] duration metric: configureAuth took 413.751759ms
	I1005 21:15:40.918315 1454285 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:15:40.918530 1454285 config.go:182] Loaded profile config "addons-792068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:15:40.918637 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:40.939444 1454285 main.go:141] libmachine: Using SSH client type: native
	I1005 21:15:40.939863 1454285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34077 <nil> <nil>}
	I1005 21:15:40.939879 1454285 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:15:41.197458 1454285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:15:41.197479 1454285 machine.go:91] provisioned docker machine in 1.146127631s
	I1005 21:15:41.197489 1454285 client.go:171] LocalClient.Create took 10.000925928s
	I1005 21:15:41.197501 1454285 start.go:167] duration metric: libmachine.API.Create for "addons-792068" took 10.000973682s
	I1005 21:15:41.197508 1454285 start.go:300] post-start starting for "addons-792068" (driver="docker")
	I1005 21:15:41.197518 1454285 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:15:41.197589 1454285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:15:41.197632 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:41.216084 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:15:41.312673 1454285 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:15:41.316853 1454285 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:15:41.316890 1454285 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:15:41.316903 1454285 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:15:41.316911 1454285 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:15:41.316921 1454285 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:15:41.316994 1454285 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:15:41.317024 1454285 start.go:303] post-start completed in 119.509039ms
	I1005 21:15:41.317421 1454285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792068
	I1005 21:15:41.335036 1454285 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/config.json ...
	I1005 21:15:41.335316 1454285 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:15:41.335368 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:41.353820 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:15:41.447582 1454285 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:15:41.453442 1454285 start.go:128] duration metric: createHost completed in 10.259488936s
	I1005 21:15:41.453468 1454285 start.go:83] releasing machines lock for "addons-792068", held for 10.259663729s
	I1005 21:15:41.453545 1454285 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-792068
	I1005 21:15:41.472585 1454285 ssh_runner.go:195] Run: cat /version.json
	I1005 21:15:41.472644 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:41.472895 1454285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:15:41.472962 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:15:41.493205 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:15:41.501853 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:15:41.581855 1454285 ssh_runner.go:195] Run: systemctl --version
	I1005 21:15:41.722426 1454285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:15:41.872712 1454285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:15:41.878637 1454285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:15:41.903394 1454285 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:15:41.903545 1454285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:15:41.942637 1454285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
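Conflicting CNI configs are disabled by renaming rather than deleting, so the change stays reversible. A sketch of the same find/rename with proper shell quoting:

    # Rename any bridge/podman CNI configs so a single CNI (here kindnet)
    # owns pod networking; the .mk_disabled suffix makes this reversible.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;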
	I1005 21:15:41.942660 1454285 start.go:469] detecting cgroup driver to use...
	I1005 21:15:41.942712 1454285 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:15:41.942792 1454285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:15:41.963690 1454285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:15:41.978965 1454285 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:15:41.979070 1454285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:15:41.995968 1454285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:15:42.029311 1454285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:15:42.163908 1454285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:15:42.292740 1454285 docker.go:213] disabling docker service ...
	I1005 21:15:42.292837 1454285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:15:42.317445 1454285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:15:42.334104 1454285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:15:42.431549 1454285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:15:42.539776 1454285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:15:42.554384 1454285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:15:42.574808 1454285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 21:15:42.574927 1454285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:15:42.587756 1454285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 21:15:42.587921 1454285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:15:42.600318 1454285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:15:42.612640 1454285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
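The sed commands above point CRI-O at the pause image, switch it to the cgroupfs cgroup manager, and pin conmon to the pod cgroup. A sketch of what the touched keys in the drop-in config should then look like:

    cat /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above (other keys omitted):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"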
	I1005 21:15:42.625027 1454285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:15:42.636943 1454285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:15:42.648258 1454285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:15:42.658794 1454285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:15:42.753088 1454285 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 21:15:42.874408 1454285 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 21:15:42.874539 1454285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 21:15:42.879448 1454285 start.go:537] Will wait 60s for crictl version
	I1005 21:15:42.879548 1454285 ssh_runner.go:195] Run: which crictl
	I1005 21:15:42.883981 1454285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:15:42.936745 1454285 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 21:15:42.936899 1454285 ssh_runner.go:195] Run: crio --version
	I1005 21:15:42.981445 1454285 ssh_runner.go:195] Run: crio --version
	I1005 21:15:43.032235 1454285 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 21:15:43.034541 1454285 cli_runner.go:164] Run: docker network inspect addons-792068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:15:43.052832 1454285 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 21:15:43.057945 1454285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:15:43.072703 1454285 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:15:43.072774 1454285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:15:43.140458 1454285 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:15:43.140481 1454285 crio.go:415] Images already preloaded, skipping extraction
	I1005 21:15:43.140537 1454285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:15:43.182679 1454285 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:15:43.182701 1454285 cache_images.go:84] Images are preloaded, skipping loading
	I1005 21:15:43.182778 1454285 ssh_runner.go:195] Run: crio config
	I1005 21:15:43.239647 1454285 cni.go:84] Creating CNI manager for ""
	I1005 21:15:43.239673 1454285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:15:43.239735 1454285 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:15:43.239763 1454285 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-792068 NodeName:addons-792068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:15:43.239924 1454285 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-792068"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:15:43.239998 1454285 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-792068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-792068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:15:43.240067 1454285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:15:43.250948 1454285 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:15:43.251021 1454285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:15:43.262100 1454285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1005 21:15:43.283533 1454285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:15:43.305604 1454285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1005 21:15:43.327602 1454285 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:15:43.332539 1454285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:15:43.346321 1454285 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068 for IP: 192.168.49.2
	I1005 21:15:43.346354 1454285 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac5d4c0ae883432caac512ac8160283213d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:43.346507 1454285 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key
	I1005 21:15:43.606143 1454285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt ...
	I1005 21:15:43.606173 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt: {Name:mk5403b38732599fa888ef405f691e3c82b4b220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:43.606372 1454285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key ...
	I1005 21:15:43.606386 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key: {Name:mk8b5bb7cd99f6a0655a8a9bda24e936dcd436fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:43.606962 1454285 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key
	I1005 21:15:43.795553 1454285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt ...
	I1005 21:15:43.795585 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt: {Name:mk03c4d53f41087fff0696cb7502429094eb0ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:43.795763 1454285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key ...
	I1005 21:15:43.795777 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key: {Name:mke911ac8500a478c32b0e06aa5f0e65285ec481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:43.796377 1454285 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.key
	I1005 21:15:43.796402 1454285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt with IP's: []
	I1005 21:15:44.090036 1454285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt ...
	I1005 21:15:44.090065 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: {Name:mk0ec698f73f762dd3407176f15e84bdb7312cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:44.090248 1454285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.key ...
	I1005 21:15:44.090263 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.key: {Name:mk9505e5c2597deaea9781a2499b6f27354abae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:44.090346 1454285 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key.dd3b5fb2
	I1005 21:15:44.090364 1454285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:15:44.507681 1454285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt.dd3b5fb2 ...
	I1005 21:15:44.507717 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt.dd3b5fb2: {Name:mkc2747146c43b6f34ffbc761a234e616b9a9002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:44.507902 1454285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key.dd3b5fb2 ...
	I1005 21:15:44.507915 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key.dd3b5fb2: {Name:mk0af2b8db10d874408f56174c5fcdec4adee6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:44.508001 1454285 certs.go:337] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt
	I1005 21:15:44.508080 1454285 certs.go:341] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key
	I1005 21:15:44.508136 1454285 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.key
	I1005 21:15:44.508158 1454285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.crt with IP's: []
	I1005 21:15:45.300079 1454285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.crt ...
	I1005 21:15:45.300122 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.crt: {Name:mk6f3725c5e676ed9e048eda370989c068e8bc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:45.300355 1454285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.key ...
	I1005 21:15:45.300365 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.key: {Name:mk902ca9dd4a66a2a567b328b1d0f49584b0ae9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:15:45.301615 1454285 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:15:45.301724 1454285 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:15:45.301887 1454285 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:15:45.301930 1454285 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem (1675 bytes)
	I1005 21:15:45.303047 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:15:45.348586 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 21:15:45.396919 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:15:45.435900 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 21:15:45.473998 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:15:45.509555 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 21:15:45.541975 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:15:45.571703 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:15:45.603142 1454285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:15:45.633441 1454285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:15:45.655443 1454285 ssh_runner.go:195] Run: openssl version
	I1005 21:15:45.663711 1454285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:15:45.675697 1454285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:15:45.680346 1454285 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:15:45.680440 1454285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:15:45.689111 1454285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
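The openssl/ln pair above implements OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located through a symlink named <subject-hash>.0. A sketch of doing the same by hand:

    # Compute the subject hash and create the lookup symlink
    # (for minikubeCA this run produced b5213941.0).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"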
	I1005 21:15:45.701270 1454285 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:15:45.705705 1454285 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:15:45.705809 1454285 kubeadm.go:404] StartCluster: {Name:addons-792068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-792068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:15:45.705900 1454285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 21:15:45.705965 1454285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:15:45.750051 1454285 cri.go:89] found id: ""
	I1005 21:15:45.750123 1454285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:15:45.761010 1454285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:15:45.772027 1454285 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:15:45.772116 1454285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:15:45.782998 1454285 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:15:45.783043 1454285 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:15:45.838717 1454285 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 21:15:45.839021 1454285 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:15:45.885166 1454285 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:15:45.885236 1454285 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:15:45.885273 1454285 kubeadm.go:322] OS: Linux
	I1005 21:15:45.885322 1454285 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:15:45.885387 1454285 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:15:45.885465 1454285 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:15:45.885520 1454285 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:15:45.885570 1454285 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:15:45.885620 1454285 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:15:45.885666 1454285 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 21:15:45.885715 1454285 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 21:15:45.885761 1454285 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 21:15:45.962151 1454285 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:15:45.962255 1454285 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:15:45.962347 1454285 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 21:15:46.229859 1454285 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:15:46.232878 1454285 out.go:204]   - Generating certificates and keys ...
	I1005 21:15:46.233006 1454285 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:15:46.233076 1454285 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:15:47.137510 1454285 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:15:47.513688 1454285 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:15:48.077323 1454285 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:15:48.433997 1454285 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:15:49.476599 1454285 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:15:49.476955 1454285 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-792068 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:15:49.816000 1454285 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:15:49.816354 1454285 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-792068 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:15:50.177836 1454285 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:15:50.516453 1454285 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:15:50.960406 1454285 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:15:50.960729 1454285 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:15:51.259573 1454285 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:15:51.473923 1454285 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:15:51.959913 1454285 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:15:52.925513 1454285 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:15:52.926151 1454285 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:15:52.928996 1454285 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:15:52.931237 1454285 out.go:204]   - Booting up control plane ...
	I1005 21:15:52.931364 1454285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:15:52.931447 1454285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:15:52.931881 1454285 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:15:52.944729 1454285 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:15:52.945923 1454285 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:15:52.946002 1454285 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:15:53.053328 1454285 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:16:00.066768 1454285 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004051 seconds
	I1005 21:16:00.066887 1454285 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:16:00.230250 1454285 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:16:00.817452 1454285 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:16:00.817635 1454285 kubeadm.go:322] [mark-control-plane] Marking the node addons-792068 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 21:16:01.329128 1454285 kubeadm.go:322] [bootstrap-token] Using token: y5o89d.sos9gmebjitvnlvi
	I1005 21:16:01.331060 1454285 out.go:204]   - Configuring RBAC rules ...
	I1005 21:16:01.331176 1454285 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:16:01.336502 1454285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:16:01.344911 1454285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:16:01.348879 1454285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:16:01.352835 1454285 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:16:01.359871 1454285 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:16:01.385005 1454285 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:16:01.639455 1454285 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:16:01.759982 1454285 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:16:01.761234 1454285 kubeadm.go:322] 
	I1005 21:16:01.761306 1454285 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:16:01.761315 1454285 kubeadm.go:322] 
	I1005 21:16:01.761401 1454285 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:16:01.761412 1454285 kubeadm.go:322] 
	I1005 21:16:01.761438 1454285 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:16:01.761498 1454285 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:16:01.761550 1454285 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:16:01.761559 1454285 kubeadm.go:322] 
	I1005 21:16:01.761611 1454285 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 21:16:01.761618 1454285 kubeadm.go:322] 
	I1005 21:16:01.761663 1454285 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 21:16:01.761672 1454285 kubeadm.go:322] 
	I1005 21:16:01.761721 1454285 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:16:01.761805 1454285 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:16:01.761874 1454285 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:16:01.761883 1454285 kubeadm.go:322] 
	I1005 21:16:01.761963 1454285 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:16:01.762042 1454285 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:16:01.762056 1454285 kubeadm.go:322] 
	I1005 21:16:01.762136 1454285 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y5o89d.sos9gmebjitvnlvi \
	I1005 21:16:01.762239 1454285 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d \
	I1005 21:16:01.762263 1454285 kubeadm.go:322] 	--control-plane 
	I1005 21:16:01.762268 1454285 kubeadm.go:322] 
	I1005 21:16:01.764065 1454285 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:16:01.764083 1454285 kubeadm.go:322] 
	I1005 21:16:01.764161 1454285 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y5o89d.sos9gmebjitvnlvi \
	I1005 21:16:01.764263 1454285 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d 
	I1005 21:16:01.766670 1454285 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:16:01.766785 1454285 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
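The SystemVerification warning is expected with the docker driver: the kernel's "configs" module is not loadable inside the container, so minikube pre-emptively skips that preflight check (the "ignoring SystemVerification" line earlier in this run). A trimmed sketch of that kind of invocation:

    # Skip preflight checks that cannot pass inside a container.
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification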
	I1005 21:16:01.766803 1454285 cni.go:84] Creating CNI manager for ""
	I1005 21:16:01.766811 1454285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:16:01.769137 1454285 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:16:01.770895 1454285 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:16:01.777454 1454285 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:16:01.777473 1454285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:16:01.826263 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:16:02.805992 1454285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:16:02.806148 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:02.806262 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=addons-792068 minikube.k8s.io/updated_at=2023_10_05T21_16_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:02.827227 1454285 ops.go:34] apiserver oom_adj: -16
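An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver (the legacy oom_adj scale runs from -17, never kill, to +15). The same check, run by hand:

    # Read the OOM-killer bias of the running kube-apiserver.
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"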
	I1005 21:16:02.970762 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:03.121275 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:03.742974 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:04.242965 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:04.742828 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:05.242419 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:05.742249 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:06.242770 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:06.742616 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:07.242511 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:07.742880 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:08.242478 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:08.742360 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:09.243040 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:09.742317 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:10.243288 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:10.742261 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:11.242899 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:11.742588 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:12.242379 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:12.742502 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:13.242607 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:13.742566 1454285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:16:13.909716 1454285 kubeadm.go:1081] duration metric: took 11.103620236s to wait for elevateKubeSystemPrivileges.
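The burst of "kubectl get sa default" runs above is a readiness poll: the command fails until the service-account controller has created the default ServiceAccount, which gates granting kube-system its RBAC binding. A sketch of the equivalent wait:

    # Retry until the "default" ServiceAccount exists.
    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5
    done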
	I1005 21:16:13.909758 1454285 kubeadm.go:406] StartCluster complete in 28.20397297s
	I1005 21:16:13.909777 1454285 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:16:13.910469 1454285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:16:13.910874 1454285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:16:13.913353 1454285 config.go:182] Loaded profile config "addons-792068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:16:13.913406 1454285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:16:13.913599 1454285 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1005 21:16:13.913764 1454285 addons.go:69] Setting volumesnapshots=true in profile "addons-792068"
	I1005 21:16:13.913781 1454285 addons.go:231] Setting addon volumesnapshots=true in "addons-792068"
	I1005 21:16:13.913826 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.914286 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.914759 1454285 addons.go:69] Setting cloud-spanner=true in profile "addons-792068"
	I1005 21:16:13.914776 1454285 addons.go:231] Setting addon cloud-spanner=true in "addons-792068"
	I1005 21:16:13.914808 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.915195 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.915678 1454285 addons.go:69] Setting inspektor-gadget=true in profile "addons-792068"
	I1005 21:16:13.915718 1454285 addons.go:231] Setting addon inspektor-gadget=true in "addons-792068"
	I1005 21:16:13.915778 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.916257 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.916633 1454285 addons.go:69] Setting metrics-server=true in profile "addons-792068"
	I1005 21:16:13.916659 1454285 addons.go:231] Setting addon metrics-server=true in "addons-792068"
	I1005 21:16:13.916702 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.917081 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.918218 1454285 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-792068"
	I1005 21:16:13.918273 1454285 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-792068"
	I1005 21:16:13.918310 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.918706 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.922462 1454285 addons.go:69] Setting default-storageclass=true in profile "addons-792068"
	I1005 21:16:13.922499 1454285 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-792068"
	I1005 21:16:13.922825 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.924567 1454285 addons.go:69] Setting registry=true in profile "addons-792068"
	I1005 21:16:13.924589 1454285 addons.go:231] Setting addon registry=true in "addons-792068"
	I1005 21:16:13.924636 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.925049 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.941557 1454285 addons.go:69] Setting storage-provisioner=true in profile "addons-792068"
	I1005 21:16:13.941582 1454285 addons.go:231] Setting addon storage-provisioner=true in "addons-792068"
	I1005 21:16:13.941644 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.942081 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.943634 1454285 addons.go:69] Setting gcp-auth=true in profile "addons-792068"
	I1005 21:16:13.943662 1454285 mustload.go:65] Loading cluster: addons-792068
	I1005 21:16:13.943847 1454285 config.go:182] Loaded profile config "addons-792068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:16:13.944104 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.965456 1454285 addons.go:69] Setting ingress=true in profile "addons-792068"
	I1005 21:16:13.965488 1454285 addons.go:231] Setting addon ingress=true in "addons-792068"
	I1005 21:16:13.965542 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.965997 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.966338 1454285 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-792068"
	I1005 21:16:13.966358 1454285 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-792068"
	I1005 21:16:13.966606 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:13.985786 1454285 addons.go:69] Setting ingress-dns=true in profile "addons-792068"
	I1005 21:16:13.985819 1454285 addons.go:231] Setting addon ingress-dns=true in "addons-792068"
	I1005 21:16:13.985879 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:13.986313 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:14.153170 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1005 21:16:14.155518 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1005 21:16:14.207122 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1005 21:16:14.207263 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.223839 1454285 out.go:177]   - Using image docker.io/registry:2.8.1
	I1005 21:16:14.229137 1454285 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1005 21:16:14.233854 1454285 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1005 21:16:14.233879 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1005 21:16:14.233942 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.241411 1454285 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1005 21:16:14.232976 1454285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:16:14.222903 1454285 addons.go:231] Setting addon default-storageclass=true in "addons-792068"
	I1005 21:16:14.235573 1454285 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-792068"
	I1005 21:16:14.246756 1454285 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1005 21:16:14.248593 1454285 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1005 21:16:14.248612 1454285 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1005 21:16:14.250805 1454285 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1005 21:16:14.250826 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1005 21:16:14.250897 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.257023 1454285 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:16:14.257053 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1005 21:16:14.257122 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.248627 1454285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1005 21:16:14.248608 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1005 21:16:14.248604 1454285 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1005 21:16:14.248632 1454285 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:16:14.248666 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:14.248684 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:14.248693 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1005 21:16:14.255018 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:14.273521 1454285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:16:14.271715 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:14.272072 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:14.272100 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.275895 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1005 21:16:14.275913 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1005 21:16:14.275979 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.287049 1454285 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:16:14.287071 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:16:14.287134 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.299824 1454285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:16:14.308437 1454285 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:16:14.308470 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1005 21:16:14.308544 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.323890 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1005 21:16:14.311830 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.324513 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.329378 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1005 21:16:14.333523 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1005 21:16:14.335248 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1005 21:16:14.336811 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1005 21:16:14.338984 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1005 21:16:14.341141 1454285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1005 21:16:14.346508 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1005 21:16:14.346531 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1005 21:16:14.346598 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.381895 1454285 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-792068" context rescaled to 1 replicas
	I1005 21:16:14.381933 1454285 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:16:14.383863 1454285 out.go:177] * Verifying Kubernetes components...
	I1005 21:16:14.385639 1454285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:16:14.449544 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.449644 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.483480 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.490874 1454285 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:16:14.490894 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:16:14.490955 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.503025 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.503584 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.529390 1454285 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1005 21:16:14.533600 1454285 out.go:177]   - Using image docker.io/busybox:stable
	I1005 21:16:14.535807 1454285 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:16:14.535828 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1005 21:16:14.535900 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:14.533497 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.543485 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.577408 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.598847 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:14.848639 1454285 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1005 21:16:14.848665 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1005 21:16:14.888161 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1005 21:16:14.956542 1454285 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1005 21:16:14.956566 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1005 21:16:14.962827 1454285 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1005 21:16:14.962859 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1005 21:16:14.965252 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:16:14.966424 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1005 21:16:14.971164 1454285 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1005 21:16:14.971195 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1005 21:16:15.041324 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1005 21:16:15.041380 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1005 21:16:15.053246 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1005 21:16:15.053284 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1005 21:16:15.083454 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:16:15.090387 1454285 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1005 21:16:15.090424 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1005 21:16:15.092048 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1005 21:16:15.108914 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1005 21:16:15.130436 1454285 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:16:15.130465 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1005 21:16:15.133876 1454285 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1005 21:16:15.133904 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1005 21:16:15.191161 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1005 21:16:15.191187 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1005 21:16:15.226122 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1005 21:16:15.226147 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1005 21:16:15.232610 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1005 21:16:15.232636 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1005 21:16:15.297625 1454285 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:16:15.297654 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1005 21:16:15.337848 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1005 21:16:15.365964 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1005 21:16:15.365990 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1005 21:16:15.413308 1454285 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:16:15.413349 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1005 21:16:15.417584 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1005 21:16:15.417616 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1005 21:16:15.504618 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1005 21:16:15.556126 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1005 21:16:15.556153 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1005 21:16:15.624727 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:16:15.642202 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1005 21:16:15.642226 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1005 21:16:15.704621 1454285 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1005 21:16:15.704646 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1005 21:16:15.787912 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1005 21:16:15.787936 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1005 21:16:15.832099 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1005 21:16:15.832122 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1005 21:16:15.937622 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1005 21:16:15.937648 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1005 21:16:15.967431 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1005 21:16:15.967454 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1005 21:16:16.039009 1454285 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:16:16.039033 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1005 21:16:16.081040 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1005 21:16:16.081071 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1005 21:16:16.185684 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1005 21:16:16.212883 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1005 21:16:16.212907 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1005 21:16:16.399834 1454285 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:16:16.399861 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1005 21:16:16.608185 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1005 21:16:16.763985 1454285 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.517200478s)
	I1005 21:16:16.764010 1454285 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
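	For clarity: the 2.5s command completed above is the sed pipeline launched at 21:16:14.232976, which rewrites the coredns ConfigMap in kube-system. Reconstructed from its sed expressions (not captured from the live cluster), the injected Corefile fragment looks like:
	
		    log
		    errors
		    ...
		    hosts {
		       192.168.49.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
	
	so pods can resolve host.minikube.internal to the Docker host at 192.168.49.1.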
	I1005 21:16:16.764044 1454285 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.378383378s)
	I1005 21:16:16.764817 1454285 node_ready.go:35] waiting up to 6m0s for node "addons-792068" to be "Ready" ...
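	The readiness poll started here is roughly equivalent to the following kubectl call (a sketch for reproducing the wait by hand; the test binary polls the API directly):
	
		kubectl --context addons-792068 wait --for=condition=Ready node/addons-792068 --timeout=6m0s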
	I1005 21:16:18.502966 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.614764777s)
	I1005 21:16:19.180092 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:19.329302 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.364008648s)
	I1005 21:16:19.329385 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.362939901s)
	I1005 21:16:19.329433 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.245955765s)
	I1005 21:16:19.333715 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.241606858s)
	W1005 21:16:19.371886 1454285 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
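	The warning above is Kubernetes' optimistic-concurrency conflict: the storage class changed between minikube's read and its write, so the stale update was rejected and has to be retried against the latest resourceVersion. Done by hand, marking local-path as default is an annotation patch that can simply be re-run on conflict (a sketch, assuming the standard default-class annotation):
	
		kubectl --context addons-792068 patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'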
	I1005 21:16:19.899275 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.790311031s)
	I1005 21:16:19.899426 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.394781329s)
	I1005 21:16:19.899433 1454285 addons.go:467] Verifying addon ingress=true in "addons-792068"
	I1005 21:16:19.899444 1454285 addons.go:467] Verifying addon metrics-server=true in "addons-792068"
	I1005 21:16:19.901834 1454285 out.go:177] * Verifying ingress addon...
	I1005 21:16:19.899548 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.274784387s)
	I1005 21:16:19.899601 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.71388755s)
	I1005 21:16:19.899351 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.561465974s)
	I1005 21:16:19.903796 1454285 addons.go:467] Verifying addon registry=true in "addons-792068"
	I1005 21:16:19.906255 1454285 out.go:177] * Verifying registry addon...
	W1005 21:16:19.903880 1454285 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1005 21:16:19.905010 1454285 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1005 21:16:19.908782 1454285 retry.go:31] will retry after 278.179411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
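	This is the usual CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, before those CRDs are established in API discovery, and the retry below succeeds because the CRDs from the first attempt are live by then. A race-free variant applies the CRDs first and waits for them to become Established (a sketch using the manifest names from this log):
	
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml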
	I1005 21:16:19.909583 1454285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1005 21:16:19.925193 1454285 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1005 21:16:19.925227 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:19.933826 1454285 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 21:16:19.933858 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:19.949166 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:19.963058 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
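	The repeated kapi.go:96 entries above and below are minikube's poll loop over the labelled pods, re-checking each selector until the pods leave Pending. By hand, each loop is a label-selector wait (sketch):
	
		kubectl --context addons-792068 -n kube-system wait --for=condition=Ready \
		  pod -l kubernetes.io/minikube-addons=registry --timeout=6m0s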
	I1005 21:16:20.187716 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1005 21:16:20.293125 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.684870466s)
	I1005 21:16:20.293169 1454285 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-792068"
	I1005 21:16:20.296492 1454285 out.go:177] * Verifying csi-hostpath-driver addon...
	I1005 21:16:20.299357 1454285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1005 21:16:20.321246 1454285 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 21:16:20.321279 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:20.340222 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:20.460027 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:20.475348 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:20.874900 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:20.973454 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:21.015107 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:21.346992 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:21.455111 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:21.467707 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:21.602953 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:21.841158 1454285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.653390357s)
	I1005 21:16:21.861157 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:21.957862 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:21.973907 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:22.346716 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:22.454055 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:22.467401 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:22.569764 1454285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1005 21:16:22.569856 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:22.601531 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:22.807247 1454285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1005 21:16:22.844484 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:22.864610 1454285 addons.go:231] Setting addon gcp-auth=true in "addons-792068"
	I1005 21:16:22.864671 1454285 host.go:66] Checking if "addons-792068" exists ...
	I1005 21:16:22.865160 1454285 cli_runner.go:164] Run: docker container inspect addons-792068 --format={{.State.Status}}
	I1005 21:16:22.895077 1454285 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1005 21:16:22.895133 1454285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-792068
	I1005 21:16:22.914745 1454285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/addons-792068/id_rsa Username:docker}
	I1005 21:16:22.955564 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:22.967603 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:23.034586 1454285 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1005 21:16:23.036797 1454285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1005 21:16:23.038580 1454285 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1005 21:16:23.038643 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1005 21:16:23.067367 1454285 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1005 21:16:23.067389 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1005 21:16:23.106194 1454285 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:16:23.106214 1454285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1005 21:16:23.130982 1454285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1005 21:16:23.345983 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:23.454916 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:23.468495 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:23.860814 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:23.943967 1454285 addons.go:467] Verifying addon gcp-auth=true in "addons-792068"
	I1005 21:16:23.946216 1454285 out.go:177] * Verifying gcp-auth addon...
	I1005 21:16:23.949106 1454285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1005 21:16:23.989201 1454285 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1005 21:16:23.989261 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:23.995578 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:23.997729 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:23.999120 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:24.102236 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:24.345617 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:24.455048 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:24.468396 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:24.502696 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:24.845324 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:24.955110 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:24.967285 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:25.002643 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:25.346844 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:25.455498 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:25.468331 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:25.502209 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:25.845043 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:25.955182 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:25.968775 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:26.006468 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:26.345793 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:26.454843 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:26.468900 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:26.502345 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:26.603008 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:26.847363 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:26.954634 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:26.968342 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:27.021221 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:27.345309 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:27.453690 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:27.467883 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:27.501558 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:27.845248 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:27.953505 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:27.967586 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:28.008185 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:28.344538 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:28.453667 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:28.467885 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:28.502245 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:28.844924 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:28.954092 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:28.967672 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:29.001479 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:29.102289 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:29.345130 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:29.454469 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:29.467454 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:29.501808 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:29.845627 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:29.954990 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:29.967800 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:30.022443 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:30.346458 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:30.454571 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:30.467461 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:30.501756 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:30.844874 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:30.954502 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:30.968860 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:31.007688 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:31.102384 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:31.345015 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:31.453643 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:31.467617 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:31.502219 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:31.845354 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:31.954169 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:31.971634 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:32.005644 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:32.344584 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:32.455023 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:32.467386 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:32.501708 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:32.845122 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:32.953912 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:32.968530 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:33.005199 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:33.102651 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:33.345314 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:33.454096 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:33.467614 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:33.502010 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:33.844937 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:33.954146 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:33.967520 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:34.002813 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:34.345187 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:34.453908 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:34.468155 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:34.501591 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:34.844976 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:34.956562 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:34.967665 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:35.006908 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:35.345566 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:35.453504 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:35.467826 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:35.501987 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:35.602652 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:35.844800 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:35.953509 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:35.967943 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:36.009117 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:36.344999 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:36.454173 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:36.467595 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:36.502297 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:36.845463 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:36.954602 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:36.968239 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:37.006261 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:37.345777 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:37.453889 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:37.468688 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:37.503429 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:37.845133 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:37.953171 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:37.967158 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:38.002522 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:38.102251 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:38.345088 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:38.453449 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:38.467900 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:38.502247 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:38.845721 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:38.953961 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:38.967234 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:39.007904 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:39.348061 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:39.453482 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:39.467460 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:39.501722 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:39.845240 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:39.954737 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:39.967485 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:40.026474 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:40.102314 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:40.346405 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:40.454326 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:40.469048 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:40.501279 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:40.844698 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:40.953482 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:40.967618 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:41.015145 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:41.344513 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:41.453873 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:41.467977 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:41.502091 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:41.844347 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:41.953811 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:41.968673 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:42.015606 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:42.103112 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:42.351666 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:42.453403 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:42.467748 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:42.502169 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:42.846828 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:42.954286 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:42.967388 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:43.007248 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:43.345568 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:43.453930 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:43.468055 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:43.502310 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:43.844704 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:43.953612 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:43.967479 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:44.002724 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:44.345701 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:44.453980 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:44.468369 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:44.501767 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:44.602353 1454285 node_ready.go:58] node "addons-792068" has status "Ready":"False"
	I1005 21:16:44.845591 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:44.956357 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:44.967522 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:45.034373 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:45.350073 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:45.456729 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:45.468194 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:45.501509 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:45.844990 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:45.954380 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:45.967401 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:46.005220 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:46.345175 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:46.495111 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:46.500935 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:46.518697 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:46.605322 1454285 node_ready.go:49] node "addons-792068" has status "Ready":"True"
	I1005 21:16:46.605369 1454285 node_ready.go:38] duration metric: took 29.840521379s waiting for node "addons-792068" to be "Ready" ...
	I1005 21:16:46.605382 1454285 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:16:46.617793 1454285 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b7cdb" in "kube-system" namespace to be "Ready" ...
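
The node_ready lines above poll the node's NodeReady condition until it reports True. For reference, a minimal client-go sketch of the same check; the kubeconfig path is illustrative and not taken from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; minikube manages its own config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-792068", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Prints the same status the log reports, e.g. "Ready":"True".
                fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
            }
        }
    }
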
	I1005 21:16:46.855178 1454285 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1005 21:16:46.855209 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:46.975090 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:47.052302 1454285 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1005 21:16:47.052329 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:47.055330 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:47.348656 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:47.454681 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:47.469824 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:47.504581 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:47.847598 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:47.954933 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:47.968452 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:48.014809 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:48.346798 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:48.454651 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:48.469599 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:48.502595 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:48.647306 1454285 pod_ready.go:92] pod "coredns-5dd5756b68-b7cdb" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:48.647333 1454285 pod_ready.go:81] duration metric: took 2.029506808s waiting for pod "coredns-5dd5756b68-b7cdb" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.647358 1454285 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.669835 1454285 pod_ready.go:92] pod "etcd-addons-792068" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:48.669910 1454285 pod_ready.go:81] duration metric: took 22.543315ms waiting for pod "etcd-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.669939 1454285 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.696108 1454285 pod_ready.go:92] pod "kube-apiserver-addons-792068" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:48.696176 1454285 pod_ready.go:81] duration metric: took 26.215541ms waiting for pod "kube-apiserver-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.696204 1454285 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.721149 1454285 pod_ready.go:92] pod "kube-controller-manager-addons-792068" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:48.721176 1454285 pod_ready.go:81] duration metric: took 24.949674ms waiting for pod "kube-controller-manager-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.721191 1454285 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-542fv" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:48.847269 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:48.954131 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:48.968832 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:49.002354 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:49.003907 1454285 pod_ready.go:92] pod "kube-proxy-542fv" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:49.003931 1454285 pod_ready.go:81] duration metric: took 282.732734ms waiting for pod "kube-proxy-542fv" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:49.003943 1454285 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-792068" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:49.350103 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:49.403111 1454285 pod_ready.go:92] pod "kube-scheduler-addons-792068" in "kube-system" namespace has status "Ready":"True"
	I1005 21:16:49.403140 1454285 pod_ready.go:81] duration metric: took 399.188538ms waiting for pod "kube-scheduler-addons-792068" in "kube-system" namespace to be "Ready" ...
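
Each pod_ready check above keys off the pod's PodReady condition. A sketch of that predicate; the helper name is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podIsReady mirrors the "Ready":"True" test in the log: a pod counts
    // as Ready once its PodReady condition is ConditionTrue.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(podIsReady(p)) // true
    }
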
	I1005 21:16:49.403154 1454285 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace to be "Ready" ...
	I1005 21:16:49.453651 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:49.468184 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:49.501812 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:49.846055 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:49.954482 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:49.970518 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:50.023153 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:50.346115 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:50.454475 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:50.468338 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:50.502274 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:50.845764 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:50.955019 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:50.967538 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:51.008458 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:51.355311 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:51.454588 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:51.468621 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:51.501834 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:51.710686 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:16:51.846423 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:51.954618 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:51.968389 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:52.012772 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:52.346996 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:52.454523 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:52.468356 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:52.502556 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:52.847106 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:52.953690 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:52.969184 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:53.019463 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:53.346775 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:53.455468 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:53.471656 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:53.503313 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:53.713383 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:16:53.848426 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:53.954489 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:53.968957 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:54.018896 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:54.352563 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:54.455095 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:54.475211 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:54.503114 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:54.846680 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:54.958443 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:54.972765 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:55.031180 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:55.372941 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:55.470235 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:55.500693 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:55.515169 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:55.730319 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:16:55.848244 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:55.954788 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:55.968794 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:56.003779 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:56.351611 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:56.456282 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:56.472195 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:56.509961 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:56.849049 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:56.956324 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:56.968627 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:57.005259 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:57.349778 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:57.495652 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:57.496845 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:57.525632 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:57.849079 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:57.955086 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:57.968779 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:58.003302 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:58.211243 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:16:58.350029 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:58.455031 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:58.468680 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:58.504093 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:58.848100 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:58.954632 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:58.967860 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:59.004114 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:59.346926 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:59.459065 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:59.471085 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:16:59.501709 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:16:59.846251 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:16:59.960720 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:16:59.972341 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:00.006188 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:00.223239 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:17:00.388535 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:00.464925 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:00.478963 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:00.502525 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:00.847658 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:00.954890 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:00.969193 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:01.011967 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:01.346872 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:01.460144 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:01.472429 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:01.505823 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:01.848441 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:01.955246 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:01.974854 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:02.004485 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:02.346719 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:02.455411 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:02.470824 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:02.505274 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:02.727926 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:17:02.848402 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:02.958246 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:02.975167 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:03.005268 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:03.347123 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:03.454233 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:03.468590 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:03.502659 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:03.849812 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:03.954542 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:03.968891 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:04.016056 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:04.351565 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:04.455158 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:04.473198 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:04.503066 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:04.849054 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:04.956948 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:04.970895 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:05.006182 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:05.216410 1454285 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"False"
	I1005 21:17:05.346768 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:05.455004 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:05.475149 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:05.507216 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:05.846640 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:05.954501 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:05.968804 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:06.009557 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:06.346806 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:06.454012 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:06.471218 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:06.502291 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:06.848277 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:06.959157 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:06.972042 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:07.027474 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:07.215355 1454285 pod_ready.go:92] pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace has status "Ready":"True"
	I1005 21:17:07.215490 1454285 pod_ready.go:81] duration metric: took 17.812272637s waiting for pod "metrics-server-7c66d45ddc-xlt65" in "kube-system" namespace to be "Ready" ...
	I1005 21:17:07.215558 1454285 pod_ready.go:38] duration metric: took 20.6101592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:17:07.215610 1454285 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:17:07.215727 1454285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:17:07.250909 1454285 api_server.go:72] duration metric: took 52.868944444s to wait for apiserver process to appear ...
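
The apiserver process check shells out to pgrep (-x exact match, -n newest process, -f match against the full command line). Run locally rather than through the test's ssh_runner, the same probe looks roughly like this; only the pgrep arguments are taken from the log, the wrapper is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pattern copied from the log line above; requires sudo.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
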
	I1005 21:17:07.250985 1454285 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:17:07.251021 1454285 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 21:17:07.262344 1454285 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 21:17:07.263850 1454285 api_server.go:141] control plane version: v1.28.2
	I1005 21:17:07.263885 1454285 api_server.go:131] duration metric: took 12.880268ms to wait for apiserver health ...
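
The healthz probe is a plain HTTPS GET against the apiserver endpoint shown above. A minimal sketch; it skips certificate verification for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Illustration only: do not skip TLS verification outside a throwaway sketch.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect 200: ok, as logged above
    }
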
	I1005 21:17:07.263895 1454285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:17:07.285252 1454285 system_pods.go:59] 17 kube-system pods found
	I1005 21:17:07.285377 1454285 system_pods.go:61] "coredns-5dd5756b68-b7cdb" [07bc030d-4716-4311-a482-aebd0cb912c6] Running
	I1005 21:17:07.285419 1454285 system_pods.go:61] "csi-hostpath-attacher-0" [b742ea2c-41cf-408d-a59d-2eee587e3c4b] Running
	I1005 21:17:07.285448 1454285 system_pods.go:61] "csi-hostpath-resizer-0" [db2a9319-d1e6-48c7-a724-65f6e28ba2e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:17:07.285477 1454285 system_pods.go:61] "csi-hostpathplugin-qbzsz" [1311bb1a-e7d4-4f7b-82c6-7e9c0d0ae69b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:17:07.285517 1454285 system_pods.go:61] "etcd-addons-792068" [4befeae1-5ab4-4984-b584-1be9bdbc6f97] Running
	I1005 21:17:07.285542 1454285 system_pods.go:61] "kindnet-kvhr4" [5ff6c3d7-3128-4808-a321-3a60479045bd] Running
	I1005 21:17:07.285565 1454285 system_pods.go:61] "kube-apiserver-addons-792068" [1109181a-858c-4ced-9615-2c88d0bf7d9e] Running
	I1005 21:17:07.285598 1454285 system_pods.go:61] "kube-controller-manager-addons-792068" [89669411-885f-4ba0-accf-747d20d74e1a] Running
	I1005 21:17:07.285629 1454285 system_pods.go:61] "kube-ingress-dns-minikube" [48f7f163-debd-4bd8-87ce-79377bf5170c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:17:07.285655 1454285 system_pods.go:61] "kube-proxy-542fv" [7f6019f2-e33b-4f90-a8bb-9c2b48558322] Running
	I1005 21:17:07.285675 1454285 system_pods.go:61] "kube-scheduler-addons-792068" [41321946-5972-4307-b4a4-ace4d8ebca20] Running
	I1005 21:17:07.285695 1454285 system_pods.go:61] "metrics-server-7c66d45ddc-xlt65" [f7b11110-7006-4910-b415-20dd0a6b4c4e] Running
	I1005 21:17:07.285731 1454285 system_pods.go:61] "registry-proxy-vz2q7" [ab807e06-35f3-4c48-9558-132b23eea60e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:17:07.285759 1454285 system_pods.go:61] "registry-vpzch" [36e6cf6c-82e4-440d-b58b-99058332f62a] Running
	I1005 21:17:07.285781 1454285 system_pods.go:61] "snapshot-controller-58dbcc7b99-5w5g6" [b98c95b0-ea9b-4a64-8834-16e5785abf2c] Running
	I1005 21:17:07.285808 1454285 system_pods.go:61] "snapshot-controller-58dbcc7b99-bg4s7" [4795be80-78cc-4d5f-b8b3-e1f170bb1d64] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:17:07.285840 1454285 system_pods.go:61] "storage-provisioner" [e6a042cc-0426-45a0-b942-93c6d0333bf9] Running
	I1005 21:17:07.285869 1454285 system_pods.go:74] duration metric: took 21.967867ms to wait for pod list to return data ...
	I1005 21:17:07.285891 1454285 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:17:07.288861 1454285 default_sa.go:45] found service account: "default"
	I1005 21:17:07.288884 1454285 default_sa.go:55] duration metric: took 2.970755ms for default service account to be created ...
	I1005 21:17:07.288895 1454285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:17:07.302847 1454285 system_pods.go:86] 17 kube-system pods found
	I1005 21:17:07.302936 1454285 system_pods.go:89] "coredns-5dd5756b68-b7cdb" [07bc030d-4716-4311-a482-aebd0cb912c6] Running
	I1005 21:17:07.302961 1454285 system_pods.go:89] "csi-hostpath-attacher-0" [b742ea2c-41cf-408d-a59d-2eee587e3c4b] Running
	I1005 21:17:07.303005 1454285 system_pods.go:89] "csi-hostpath-resizer-0" [db2a9319-d1e6-48c7-a724-65f6e28ba2e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1005 21:17:07.303039 1454285 system_pods.go:89] "csi-hostpathplugin-qbzsz" [1311bb1a-e7d4-4f7b-82c6-7e9c0d0ae69b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1005 21:17:07.303077 1454285 system_pods.go:89] "etcd-addons-792068" [4befeae1-5ab4-4984-b584-1be9bdbc6f97] Running
	I1005 21:17:07.303098 1454285 system_pods.go:89] "kindnet-kvhr4" [5ff6c3d7-3128-4808-a321-3a60479045bd] Running
	I1005 21:17:07.303120 1454285 system_pods.go:89] "kube-apiserver-addons-792068" [1109181a-858c-4ced-9615-2c88d0bf7d9e] Running
	I1005 21:17:07.303157 1454285 system_pods.go:89] "kube-controller-manager-addons-792068" [89669411-885f-4ba0-accf-747d20d74e1a] Running
	I1005 21:17:07.303186 1454285 system_pods.go:89] "kube-ingress-dns-minikube" [48f7f163-debd-4bd8-87ce-79377bf5170c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1005 21:17:07.303213 1454285 system_pods.go:89] "kube-proxy-542fv" [7f6019f2-e33b-4f90-a8bb-9c2b48558322] Running
	I1005 21:17:07.303250 1454285 system_pods.go:89] "kube-scheduler-addons-792068" [41321946-5972-4307-b4a4-ace4d8ebca20] Running
	I1005 21:17:07.303278 1454285 system_pods.go:89] "metrics-server-7c66d45ddc-xlt65" [f7b11110-7006-4910-b415-20dd0a6b4c4e] Running
	I1005 21:17:07.303323 1454285 system_pods.go:89] "registry-proxy-vz2q7" [ab807e06-35f3-4c48-9558-132b23eea60e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1005 21:17:07.303349 1454285 system_pods.go:89] "registry-vpzch" [36e6cf6c-82e4-440d-b58b-99058332f62a] Running
	I1005 21:17:07.303371 1454285 system_pods.go:89] "snapshot-controller-58dbcc7b99-5w5g6" [b98c95b0-ea9b-4a64-8834-16e5785abf2c] Running
	I1005 21:17:07.303412 1454285 system_pods.go:89] "snapshot-controller-58dbcc7b99-bg4s7" [4795be80-78cc-4d5f-b8b3-e1f170bb1d64] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1005 21:17:07.303440 1454285 system_pods.go:89] "storage-provisioner" [e6a042cc-0426-45a0-b942-93c6d0333bf9] Running
	I1005 21:17:07.303480 1454285 system_pods.go:126] duration metric: took 14.563487ms to wait for k8s-apps to be running ...
	I1005 21:17:07.303508 1454285 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:17:07.303607 1454285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:17:07.328829 1454285 system_svc.go:56] duration metric: took 25.313046ms WaitForService to wait for kubelet.
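
The kubelet liveness check is a single systemctl invocation whose exit status is the answer. A local equivalent, with the arguments copied verbatim from the log line above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means active; with --quiet nothing is printed either way.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
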
	I1005 21:17:07.328910 1454285 kubeadm.go:581] duration metric: took 52.946951292s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:17:07.328948 1454285 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:17:07.333782 1454285 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:17:07.333866 1454285 node_conditions.go:123] node cpu capacity is 2
	I1005 21:17:07.333895 1454285 node_conditions.go:105] duration metric: took 4.910597ms to run NodePressure ...
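
The two capacity figures are read straight off the Node object's status. A self-contained sketch of where they live; the sample values are the ones reported above:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        node := &corev1.Node{Status: corev1.NodeStatus{Capacity: corev1.ResourceList{
            corev1.ResourceEphemeralStorage: resource.MustParse("203034800Ki"),
            corev1.ResourceCPU:              resource.MustParse("2"),
        }}}
        fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
    }
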
	I1005 21:17:07.333937 1454285 start.go:228] waiting for startup goroutines ...
	I1005 21:17:07.348327 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:07.460620 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:07.474129 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:07.506738 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:07.852035 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:07.955098 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:07.968342 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:08.015326 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:08.346826 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:08.454390 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:08.468249 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:08.502649 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:08.846102 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:08.953674 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:08.968502 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:09.004245 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:09.346288 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:09.454362 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:09.477632 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:09.502985 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:09.857534 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:09.957763 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:09.969591 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:10.006795 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:10.346529 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:10.454216 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:10.468121 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:10.502000 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:10.846456 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:10.953747 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:10.979554 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:11.012031 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:11.346225 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:11.454528 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:11.468162 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:11.502930 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:11.847041 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:11.953549 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:11.968209 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:12.015784 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:12.351678 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:12.455107 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:12.469325 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:12.502458 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:12.847755 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:12.954640 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:12.968893 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:13.003305 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:13.346231 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:13.455057 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:13.468092 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:13.502063 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:13.848386 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:13.958148 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:13.968794 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:14.002355 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:14.348056 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:14.456439 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:14.471706 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:14.502139 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:14.847333 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:14.956512 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:14.970814 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:15.002961 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:15.350195 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:15.454521 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:15.470266 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:15.524021 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:15.846461 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:15.955214 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:15.968214 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1005 21:17:16.005683 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:16.345942 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:16.453909 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:16.468397 1454285 kapi.go:107] duration metric: took 56.558809463s to wait for kubernetes.io/minikube-addons=registry ...
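
The half-second cadence of the kapi.go:96 lines is a poll over a label selector that keeps retrying while any matching pod is not yet Running, and logs a duration metric like the one above once the selector converges. A minimal sketch of that loop shape, assuming client-go; the interval, timeout, namespace, and names are illustrative, not minikube's actual kapi implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls every 500ms until every pod matching selector is Running.
    func waitForLabel(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("").List(context.Background(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // transient: keep polling, like the Pending lines above
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    return false, nil
                }
            }
            return true, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForLabel(cs, "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("all pods for kubernetes.io/minikube-addons=registry are Running")
    }
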
	I1005 21:17:16.502364 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:16.847205 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:16.954266 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:17.002520 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:17.348481 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:17.455303 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:17.502397 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:17.847750 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:17.954576 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:18.007440 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:18.346797 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:18.454073 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:18.503006 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:18.846931 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:18.955071 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:19.015064 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:19.346973 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:19.454910 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:19.517409 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:19.846102 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:19.954997 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:20.021229 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:20.346808 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:20.456841 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:20.502077 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:20.850573 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:20.953723 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:21.008577 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:21.346274 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:21.454993 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:21.506622 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:21.850648 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:21.954159 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:22.018306 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:22.346571 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:22.454386 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:22.509049 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:22.849274 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:22.954844 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:23.002762 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:23.349293 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:23.455722 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:23.503306 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:23.846717 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:23.953632 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:24.013057 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:24.346538 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:24.454683 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:24.509353 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:24.846658 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:24.953990 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:25.004164 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:25.347607 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:25.456223 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:25.501929 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:25.846747 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:25.961179 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:26.021755 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:26.347587 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:26.454177 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:26.502117 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:26.846373 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:26.954448 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:27.008015 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:27.346250 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:27.454595 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:27.502812 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:27.846296 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:27.955124 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:28.003305 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:28.354830 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:28.455075 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:28.501779 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:28.846795 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:28.953944 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:29.003319 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:29.346048 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:29.454769 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:29.502710 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:29.860395 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:29.958130 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:30.056369 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:30.347064 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:30.455122 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:30.503019 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:30.847048 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:30.955581 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:31.007644 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:31.346065 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:31.454558 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:31.504112 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:31.847059 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:31.954279 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:32.011307 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:32.357168 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:32.454687 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:32.501283 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:32.846146 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:32.954290 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:33.021402 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:33.353021 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:33.455794 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:33.502480 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:33.846803 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:33.955084 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:34.016312 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:34.348956 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:34.479543 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:34.508442 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:34.854458 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:34.955368 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:35.003064 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:35.346724 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:35.455292 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:35.502388 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:35.851923 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:35.955463 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:36.022051 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:36.346801 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:36.456003 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:36.502461 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:36.846662 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:36.955238 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:37.013077 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:37.350522 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:37.465582 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:37.509325 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:37.850314 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:37.955629 1454285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1005 21:17:38.012585 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:38.350406 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:38.455684 1454285 kapi.go:107] duration metric: took 1m18.550670238s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1005 21:17:38.502637 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:38.845900 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:39.011320 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:39.346602 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:39.501818 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:39.848800 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:40.024289 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:40.347221 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:40.503609 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:40.868771 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:41.006805 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:41.347490 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:41.504249 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:41.846631 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:42.008298 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:42.347199 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:42.502550 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:42.852336 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:43.003251 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:43.347818 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:43.502112 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:43.846649 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:44.005030 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:44.347458 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:44.504420 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:44.846511 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:45.043572 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:45.357091 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:45.503793 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:45.845808 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:46.003452 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:46.350755 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:46.501811 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:46.848196 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:47.007067 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:47.347188 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:47.501945 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:47.846295 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1005 21:17:48.008777 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:48.346208 1454285 kapi.go:107] duration metric: took 1m28.04684962s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1005 21:17:48.501394 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:49.003512 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:49.506589 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:50.019732 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:50.501432 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:51.003455 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:51.501584 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:52.005729 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:52.502285 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:53.014755 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:53.502161 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:54.008747 1454285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1005 21:17:54.502345 1454285 kapi.go:107] duration metric: took 1m30.553237706s to wait for kubernetes.io/minikube-addons=gcp-auth ...
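(Editor's note: the three kapi.go:107 "duration metric" lines in this run mark the end of label-selector polling; every "waiting for pod ... current state: Pending" line above is one iteration of that loop. For orientation only, here is a minimal client-go sketch of the same pattern. The function name, namespace, interval, and output format are assumptions, not minikube's actual kapi.go code.)

// Sketch of a label-selector readiness wait, similar in spirit to the
// kapi.go loop above. Assumed details: 500ms poll interval, gcp-auth
// namespace, and the log wording.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			// Treat transient list errors and missing pods as "keep polling".
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil // all matching pods are Running
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=gcp-auth\n", time.Since(start))
}

(Retrying on transient list errors matches the steady sub-second cadence of the log above.)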
	I1005 21:17:54.504316 1454285 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-792068 cluster.
	I1005 21:17:54.506623 1454285 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1005 21:17:54.508226 1454285 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
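(Editor's note: to make the gcp-auth-skip-secret hint above concrete, here is a minimal sketch that creates a pod carrying that label so the webhook leaves it alone. The label key comes from the message above; the "true" value, pod name, and image are illustrative assumptions.)

// Sketch: opt a pod out of gcp-auth credential mounting via the
// gcp-auth-skip-secret label named in the message above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // "true" value is assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/google-samples/hello-app:1.0", // image taken from the CRI-O log below
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}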
	I1005 21:17:54.510385 1454285 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1005 21:17:54.512226 1454285 addons.go:502] enable addons completed in 1m40.59862153s: enabled=[cloud-spanner storage-provisioner ingress-dns default-storageclass metrics-server inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1005 21:17:54.512270 1454285 start.go:233] waiting for cluster config update ...
	I1005 21:17:54.512287 1454285 start.go:242] writing updated cluster config ...
	I1005 21:17:54.512589 1454285 ssh_runner.go:195] Run: rm -f paused
	I1005 21:17:54.575834 1454285 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:17:54.577894 1454285 out.go:177] * Done! kubectl is now configured to use "addons-792068" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.800620525Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=9bf4157a-4b23-49e5-9037-57d78fcb9921 name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.802098838Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=406204ae-0321-4ff2-92f3-1e531bf2ad6c name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.802325939Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=406204ae-0321-4ff2-92f3-1e531bf2ad6c name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.803329842Z" level=info msg="Creating container: default/hello-world-app-5d77478584-bfjfd/hello-world-app" id=8d8df171-3b55-4dcc-8ae8-0258caa72170 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.803440110Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.890146138Z" level=info msg="Created container a79eacb7704eaf5e01bc04b7132a9dd5cb6bca74e9f7e04133c65679bbdc02ce: default/hello-world-app-5d77478584-bfjfd/hello-world-app" id=8d8df171-3b55-4dcc-8ae8-0258caa72170 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.891109934Z" level=info msg="Starting container: a79eacb7704eaf5e01bc04b7132a9dd5cb6bca74e9f7e04133c65679bbdc02ce" id=e9755d9b-9e78-4914-b394-89f6575eaef3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:21:23 addons-792068 conmon[8217]: conmon a79eacb7704eaf5e01bc <ninfo>: container 8228 exited with status 1
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.905982900Z" level=info msg="Started container" PID=8228 containerID=a79eacb7704eaf5e01bc04b7132a9dd5cb6bca74e9f7e04133c65679bbdc02ce description=default/hello-world-app-5d77478584-bfjfd/hello-world-app id=e9755d9b-9e78-4914-b394-89f6575eaef3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d41c1e657919b25b6e4d3889d0de042d0b2ed41514fcac4f6d772e6653b05fe5
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.959518293Z" level=info msg="Removing container: 5c494f2b3f31b4080a57053b4a0436d5fd83a3d8bb542a1e4462439ce3a74fea" id=8579ca99-7bca-4f90-8e07-633a9c2ad85e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 05 21:21:23 addons-792068 crio[888]: time="2023-10-05 21:21:23.987446794Z" level=info msg="Removed container 5c494f2b3f31b4080a57053b4a0436d5fd83a3d8bb542a1e4462439ce3a74fea: default/hello-world-app-5d77478584-bfjfd/hello-world-app" id=8579ca99-7bca-4f90-8e07-633a9c2ad85e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.693049117Z" level=warning msg="Stopping container ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2eb31471-cb00-4bda-b3e0-19d73e3d53c7 name=/runtime.v1.RuntimeService/StopContainer
	Oct 05 21:21:24 addons-792068 conmon[4660]: conmon ad5d348f8aca2b2473b0 <ninfo>: container 4671 exited with status 137
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.855075631Z" level=info msg="Stopped container ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6: ingress-nginx/ingress-nginx-controller-5c4c674fdc-5gzqc/controller" id=2eb31471-cb00-4bda-b3e0-19d73e3d53c7 name=/runtime.v1.RuntimeService/StopContainer
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.855988195Z" level=info msg="Stopping pod sandbox: 14ced4be6b7a784099c1592f9c91a48acbcee525cde2dc15d1eb4608fb3efb7a" id=75ef8858-b606-45df-b599-a0e65f5af99e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.859826551Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-JJANO4ASUTWTWIGP - [0:0]\n:KUBE-HP-GJCBXSF5H2FKNVAW - [0:0]\n-X KUBE-HP-GJCBXSF5H2FKNVAW\n-X KUBE-HP-JJANO4ASUTWTWIGP\nCOMMIT\n"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.861501032Z" level=info msg="Closing host port tcp:80"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.861557540Z" level=info msg="Closing host port tcp:443"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.863218656Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.863251583Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.863438741Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5c4c674fdc-5gzqc Namespace:ingress-nginx ID:14ced4be6b7a784099c1592f9c91a48acbcee525cde2dc15d1eb4608fb3efb7a UID:48e9336a-a7ca-4ee7-ab6f-e739b0be0a32 NetNS:/var/run/netns/3ddb49fd-1927-4288-8a0d-5e945a56a0bd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.863583495Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5c4c674fdc-5gzqc from CNI network \"kindnet\" (type=ptp)"
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.884372571Z" level=info msg="Stopped pod sandbox: 14ced4be6b7a784099c1592f9c91a48acbcee525cde2dc15d1eb4608fb3efb7a" id=75ef8858-b606-45df-b599-a0e65f5af99e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.963152608Z" level=info msg="Removing container: ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6" id=9e1f8052-7af3-4f97-921c-ae9c48526da5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 05 21:21:24 addons-792068 crio[888]: time="2023-10-05 21:21:24.984205002Z" level=info msg="Removed container ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6: ingress-nginx/ingress-nginx-controller-5c4c674fdc-5gzqc/controller" id=9e1f8052-7af3-4f97-921c-ae9c48526da5 name=/runtime.v1.RuntimeService/RemoveContainer
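(Editor's note: the two-second stop timeout followed by conmon's "exited with status 137" above is the usual SIGTERM-then-SIGKILL sequence. Exit statuses above 128 encode a fatal signal as 128 + signal number, and 137 = 128 + 9, i.e. SIGKILL: the runtime force-killed the ingress controller after the stop timeout expired. A tiny decoding sketch:)

// Decode a container exit status into its fatal signal, if any.
// 137 -> signal 9 (SIGKILL).
package main

import (
	"fmt"
	"syscall"
)

func main() {
	status := 137
	if status > 128 {
		sig := syscall.Signal(status - 128)
		fmt.Printf("exit status %d = 128 + %d (%v)\n", status, status-128, sig)
	}
}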
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a79eacb7704ea       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                             6 seconds ago       Exited              hello-world-app           2                   d41c1e657919b       hello-world-app-5d77478584-bfjfd
	9f29214cd5589       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                              2 minutes ago       Running             nginx                     0                   3376b8d636880       nginx
	bf63bb1179eba       ghcr.io/headlamp-k8s/headlamp@sha256:44b17c125fc5da7899f2583ca3468a31cc80ea52c9ef2aad503f58d91908e4c1                        3 minutes ago       Running             headlamp                  0                   f554d249125b1       headlamp-58b88cff49-fqfmq
	ac8118de41e15       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   d0ff28af29aa1       gcp-auth-d4c87556c-k7c8d
	eb2514ef13d55       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago       Running             local-path-provisioner    0                   3d6792b6812a6       local-path-provisioner-78b46b4d5c-6tx2f
	c51e29df56c1c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              patch                     0                   b821e7747924a       ingress-nginx-admission-patch-cq75s
	c2c08a0a3f5fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              create                    0                   3e77669548201       ingress-nginx-admission-create-xnwt6
	3a6d0571b137a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   b0058f1e85e5d       coredns-5dd5756b68-b7cdb
	505f0e88dd644       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   c2d45a7b01247       storage-provisioner
	855c1139a2f2f       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   322422b03345b       kindnet-kvhr4
	45f10f4a9f04b       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                             5 minutes ago       Running             kube-proxy                0                   f897fbc32b6da       kube-proxy-542fv
	dfda9ebf5a278       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                             5 minutes ago       Running             kube-scheduler            0                   561722ad6bbe8       kube-scheduler-addons-792068
	ed8d1529320d2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   3afe85c9b04b6       etcd-addons-792068
	860de4036972a       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                             5 minutes ago       Running             kube-controller-manager   0                   5235374a74682       kube-controller-manager-addons-792068
	41bc34e902579       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                             5 minutes ago       Running             kube-apiserver            0                   fcbf335c37423       kube-apiserver-addons-792068
	
	* 
	* ==> coredns [3a6d0571b137a5968de0202d61dffa66ba76cd140a67e93f81854ba868d7411e] <==
	* [INFO] 10.244.0.17:50712 - 36734 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044177s
	[INFO] 10.244.0.17:50712 - 12239 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003330829s
	[INFO] 10.244.0.17:52978 - 32121 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004735822s
	[INFO] 10.244.0.17:52978 - 35166 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002129306s
	[INFO] 10.244.0.17:52978 - 33381 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000282239s
	[INFO] 10.244.0.17:50712 - 64372 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002695097s
	[INFO] 10.244.0.17:50712 - 29541 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096763s
	[INFO] 10.244.0.17:43034 - 9295 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000146371s
	[INFO] 10.244.0.17:50208 - 36652 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098544s
	[INFO] 10.244.0.17:43034 - 51548 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088115s
	[INFO] 10.244.0.17:50208 - 62908 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006638s
	[INFO] 10.244.0.17:43034 - 50956 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075774s
	[INFO] 10.244.0.17:50208 - 8785 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054655s
	[INFO] 10.244.0.17:43034 - 41459 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068832s
	[INFO] 10.244.0.17:50208 - 56772 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056369s
	[INFO] 10.244.0.17:43034 - 22551 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083987s
	[INFO] 10.244.0.17:50208 - 48051 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038334s
	[INFO] 10.244.0.17:43034 - 20886 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047433s
	[INFO] 10.244.0.17:50208 - 13173 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043052s
	[INFO] 10.244.0.17:43034 - 30528 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001636459s
	[INFO] 10.244.0.17:50208 - 36828 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002884348s
	[INFO] 10.244.0.17:43034 - 29911 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001292492s
	[INFO] 10.244.0.17:50208 - 1555 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001431709s
	[INFO] 10.244.0.17:43034 - 63585 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006866s
	[INFO] 10.244.0.17:50208 - 58365 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073525s
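(Editor's note: the NXDOMAIN bursts above are ordinary search-path expansion, not failures. The name hello-world-app.default.svc.cluster.local has only four dots, so with the Kubernetes default of ndots:5 the resolver first appends each resolv.conf search domain, here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and us-east-2.compute.internal, before trying the name as-is, which finally answers NOERROR. A minimal in-cluster sketch, assuming the same service name: a trailing dot makes the name absolute and skips the expansion.)

// Resolve the service by its fully qualified name. The trailing dot
// marks the name absolute, so no search domains are appended. This
// only resolves from inside the cluster, where service DNS is visible.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}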
	
	* 
	* ==> describe nodes <==
	* Name:               addons-792068
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-792068
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=addons-792068
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_16_02_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-792068
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:15:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-792068
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:21:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:19:05 +0000   Thu, 05 Oct 2023 21:15:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:19:05 +0000   Thu, 05 Oct 2023 21:15:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:19:05 +0000   Thu, 05 Oct 2023 21:15:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:19:05 +0000   Thu, 05 Oct 2023 21:16:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-792068
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee9cbdf0928c4937a77f30b73b10522e
	  System UUID:                65d27869-c4b1-4987-a064-1c8f6d7a5624
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-bfjfd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-d4c87556c-k7c8d                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  headlamp                    headlamp-58b88cff49-fqfmq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 coredns-5dd5756b68-b7cdb                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m16s
	  kube-system                 etcd-addons-792068                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m28s
	  kube-system                 kindnet-kvhr4                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m17s
	  kube-system                 kube-apiserver-addons-792068               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-addons-792068      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-542fv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-addons-792068               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  local-path-storage          local-path-provisioner-78b46b4d5c-6tx2f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node addons-792068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node addons-792068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x8 over 5m36s)  kubelet          Node addons-792068 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s                  kubelet          Node addons-792068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s                  kubelet          Node addons-792068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s                  kubelet          Node addons-792068 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m17s                  node-controller  Node addons-792068 event: Registered Node addons-792068 in Controller
	  Normal  NodeReady                4m44s                  kubelet          Node addons-792068 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000773] FS-Cache: N-cookie c=00000041 [p=00000038 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000086b468c8
	[  +0.001047] FS-Cache: N-key=[8] 'bbd5c90000000000'
	[  +0.003148] FS-Cache: Duplicate cookie detected
	[  +0.000704] FS-Cache: O-cookie c=0000003b [p=00000038 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000004cda713e
	[  +0.001089] FS-Cache: O-key=[8] 'bbd5c90000000000'
	[  +0.000695] FS-Cache: N-cookie c=00000042 [p=00000038 fl=2 nc=0 na=1]
	[  +0.000924] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=00000000578c11b8
	[  +0.001041] FS-Cache: N-key=[8] 'bbd5c90000000000'
	[  +2.654663] FS-Cache: Duplicate cookie detected
	[  +0.000723] FS-Cache: O-cookie c=00000039 [p=00000038 fl=226 nc=0 na=1]
	[  +0.001063] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=00000000886be195
	[  +0.001131] FS-Cache: O-key=[8] 'bad5c90000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000044 [p=00000038 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000047c089d5
	[  +0.001069] FS-Cache: N-key=[8] 'bad5c90000000000'
	[  +0.327391] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=0000003e [p=00000038 fl=226 nc=0 na=1]
	[  +0.001064] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=0000000086ccfbc9
	[  +0.001067] FS-Cache: O-key=[8] 'c0d5c90000000000'
	[  +0.000754] FS-Cache: N-cookie c=00000045 [p=00000038 fl=2 nc=0 na=1]
	[  +0.001014] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000086b468c8
	[  +0.001131] FS-Cache: N-key=[8] 'c0d5c90000000000'
	[Oct 5 20:17] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [ed8d1529320d2054c5863f73f859fab5c6751adff0e9cd109e030497c2e3c7f4] <==
	* {"level":"info","ts":"2023-10-05T21:15:55.513393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-05T21:15:55.513529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-05T21:15:55.513576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-05T21:15:55.513622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:15:55.513659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:15:55.513705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-05T21:15:55.513742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-05T21:15:55.519346Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:15:55.519203Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-792068 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:15:55.519523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:15:55.523413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:15:55.523517Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:15:55.53345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-05T21:15:55.523844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:15:55.533596Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:15:55.533657Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:15:55.522398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:15:55.534645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2023-10-05T21:16:14.674832Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.998142ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024266902247817 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:336 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:120 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-05T21:16:14.705939Z","caller":"traceutil/trace.go:171","msg":"trace[217229432] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"251.073761ms","start":"2023-10-05T21:16:14.454849Z","end":"2023-10-05T21:16:14.705922Z","steps":["trace[217229432] 'process raft request'  (duration: 105.525108ms)","trace[217229432] 'compare'  (duration: 111.918856ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-05T21:16:14.707866Z","caller":"traceutil/trace.go:171","msg":"trace[1946410235] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"147.91886ms","start":"2023-10-05T21:16:14.55993Z","end":"2023-10-05T21:16:14.707849Z","steps":["trace[1946410235] 'process raft request'  (duration: 145.673149ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T21:16:17.622298Z","caller":"traceutil/trace.go:171","msg":"trace[736688583] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:414; }","duration":"178.88585ms","start":"2023-10-05T21:16:17.443403Z","end":"2023-10-05T21:16:17.622288Z","steps":[],"step_count":0}
	{"level":"info","ts":"2023-10-05T21:16:18.087492Z","caller":"traceutil/trace.go:171","msg":"trace[880536812] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"180.633988ms","start":"2023-10-05T21:16:17.906843Z","end":"2023-10-05T21:16:18.087477Z","steps":["trace[880536812] 'process raft request'  (duration: 180.19097ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T21:16:18.088824Z","caller":"traceutil/trace.go:171","msg":"trace[1236090018] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"171.700309ms","start":"2023-10-05T21:16:17.9067Z","end":"2023-10-05T21:16:18.0784Z","steps":["trace[1236090018] 'process raft request'  (duration: 163.264203ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-05T21:16:18.184632Z","caller":"traceutil/trace.go:171","msg":"trace[1748170063] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"163.51468ms","start":"2023-10-05T21:16:17.906561Z","end":"2023-10-05T21:16:18.070076Z","steps":["trace[1748170063] 'process raft request'  (duration: 145.413345ms)","trace[1748170063] 'compare'  (duration: 17.761799ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [ac8118de41e15ab85995a852039f869be4adbff3ddfcd2778dc6e2e79e412659] <==
	* 2023/10/05 21:17:53 GCP Auth Webhook started!
	2023/10/05 21:17:55 Ready to marshal response ...
	2023/10/05 21:17:55 Ready to write response ...
	2023/10/05 21:17:55 Ready to marshal response ...
	2023/10/05 21:17:55 Ready to write response ...
	2023/10/05 21:18:04 Ready to marshal response ...
	2023/10/05 21:18:04 Ready to write response ...
	2023/10/05 21:18:04 Ready to marshal response ...
	2023/10/05 21:18:04 Ready to write response ...
	2023/10/05 21:18:12 Ready to marshal response ...
	2023/10/05 21:18:12 Ready to write response ...
	2023/10/05 21:18:12 Ready to marshal response ...
	2023/10/05 21:18:12 Ready to write response ...
	2023/10/05 21:18:12 Ready to marshal response ...
	2023/10/05 21:18:12 Ready to write response ...
	2023/10/05 21:18:31 Ready to marshal response ...
	2023/10/05 21:18:31 Ready to write response ...
	2023/10/05 21:18:42 Ready to marshal response ...
	2023/10/05 21:18:42 Ready to write response ...
	2023/10/05 21:18:56 Ready to marshal response ...
	2023/10/05 21:18:56 Ready to write response ...
	2023/10/05 21:21:04 Ready to marshal response ...
	2023/10/05 21:21:04 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:21:30 up  7:03,  0 users,  load average: 0.25, 1.00, 1.59
	Linux addons-792068 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [855c1139a2f2f82ea257c13aa67e6053175469a29af3a2b56d8955c08c4eb9a1] <==
	* I1005 21:19:26.442208       1 main.go:227] handling current node
	I1005 21:19:36.454519       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:19:36.454543       1 main.go:227] handling current node
	I1005 21:19:46.467243       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:19:46.467271       1 main.go:227] handling current node
	I1005 21:19:56.479650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:19:56.479679       1 main.go:227] handling current node
	I1005 21:20:06.491953       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:06.491984       1 main.go:227] handling current node
	I1005 21:20:16.496510       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:16.496557       1 main.go:227] handling current node
	I1005 21:20:26.506350       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:26.506454       1 main.go:227] handling current node
	I1005 21:20:36.510277       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:36.510310       1 main.go:227] handling current node
	I1005 21:20:46.515573       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:46.515603       1 main.go:227] handling current node
	I1005 21:20:56.524073       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:20:56.524101       1 main.go:227] handling current node
	I1005 21:21:06.532136       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:21:06.532213       1 main.go:227] handling current node
	I1005 21:21:16.544733       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:21:16.544762       1 main.go:227] handling current node
	I1005 21:21:26.557089       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:21:26.557121       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [41bc34e902579a5d70874b1c67a109230303ffabd373ff47f002597ec556dc0d] <==
	* I1005 21:18:41.905082       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1005 21:18:42.332780       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.46.15"}
	I1005 21:18:42.994032       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1005 21:19:07.875778       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1005 21:19:13.496113       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.497450       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.511656       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.512111       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.540960       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.541010       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.541964       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.542016       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.566209       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.570130       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.576298       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.576927       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.588872       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.588936       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1005 21:19:13.608424       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1005 21:19:13.609188       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1005 21:19:14.542916       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1005 21:19:14.608614       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1005 21:19:14.614293       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1005 21:21:04.628535       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.124.130"}
	E1005 21:21:21.748810       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [860de4036972abb8804325ecd546e2f25492e28f2721d02cd6a72ad87844f36b] <==
	* W1005 21:20:27.345400       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:20:27.345440       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:20:34.473422       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:20:34.473452       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:20:57.199296       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:20:57.199329       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1005 21:21:04.370073       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1005 21:21:04.422704       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-bfjfd"
	I1005 21:21:04.436276       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.973156ms"
	I1005 21:21:04.445008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.680746ms"
	I1005 21:21:04.445147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.002µs"
	I1005 21:21:04.455481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.237µs"
	I1005 21:21:07.952458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="180.89µs"
	I1005 21:21:08.945076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.28µs"
	I1005 21:21:09.935564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.951µs"
	W1005 21:21:13.685999       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:21:13.686044       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:21:15.481507       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:21:15.481544       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1005 21:21:15.561838       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1005 21:21:15.561874       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1005 21:21:21.661910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="6.654µs"
	I1005 21:21:21.663804       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1005 21:21:21.670485       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1005 21:21:23.973234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="183.769µs"
	
	* 
	* ==> kube-proxy [45f10f4a9f04b792903967bf739343a84e2f807da94c54718b4fa1a32446a305] <==
	* I1005 21:16:19.369464       1 server_others.go:69] "Using iptables proxy"
	I1005 21:16:19.408820       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1005 21:16:19.572215       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:16:19.575131       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:16:19.575445       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:16:19.575456       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:16:19.575540       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:16:19.575922       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:16:19.575961       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:16:19.587427       1 config.go:188] "Starting service config controller"
	I1005 21:16:19.587568       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:16:19.587627       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:16:19.587657       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:16:19.588257       1 config.go:315] "Starting node config controller"
	I1005 21:16:19.588318       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:16:19.689208       1 shared_informer.go:318] Caches are synced for node config
	I1005 21:16:19.689354       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:16:19.689414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [dfda9ebf5a27845b5492e3fe47db8f72a13aac443f802d5f850927d68e660bc3] <==
	* I1005 21:15:58.558185       1 serving.go:348] Generated self-signed cert in-memory
	W1005 21:15:59.654934       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 21:15:59.654968       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:15:59.654983       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 21:15:59.654993       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:15:59.681263       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1005 21:15:59.681293       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:15:59.683086       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1005 21:15:59.683159       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:15:59.683941       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1005 21:15:59.684009       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1005 21:15:59.700132       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:15:59.701910       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1005 21:16:01.183977       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 05 21:21:09 addons-792068 kubelet[1359]: E1005 21:21:09.922396    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-bfjfd_default(274e937f-ae75-4a74-aef5-9255526c29d0)\"" pod="default/hello-world-app-5d77478584-bfjfd" podUID="274e937f-ae75-4a74-aef5-9255526c29d0"
	Oct 05 21:21:19 addons-792068 kubelet[1359]: I1005 21:21:19.800117    1359 scope.go:117] "RemoveContainer" containerID="fccb251b0fedb05db4dcd034eb030339eefd52ff65fcf0c894e2edc76d1fd4af"
	Oct 05 21:21:19 addons-792068 kubelet[1359]: E1005 21:21:19.800373    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(48f7f163-debd-4bd8-87ce-79377bf5170c)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="48f7f163-debd-4bd8-87ce-79377bf5170c"
	Oct 05 21:21:20 addons-792068 kubelet[1359]: I1005 21:21:20.527484    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p872j\" (UniqueName: \"kubernetes.io/projected/48f7f163-debd-4bd8-87ce-79377bf5170c-kube-api-access-p872j\") pod \"48f7f163-debd-4bd8-87ce-79377bf5170c\" (UID: \"48f7f163-debd-4bd8-87ce-79377bf5170c\") "
	Oct 05 21:21:20 addons-792068 kubelet[1359]: I1005 21:21:20.533233    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f7f163-debd-4bd8-87ce-79377bf5170c-kube-api-access-p872j" (OuterVolumeSpecName: "kube-api-access-p872j") pod "48f7f163-debd-4bd8-87ce-79377bf5170c" (UID: "48f7f163-debd-4bd8-87ce-79377bf5170c"). InnerVolumeSpecName "kube-api-access-p872j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:21:20 addons-792068 kubelet[1359]: I1005 21:21:20.628512    1359 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p872j\" (UniqueName: \"kubernetes.io/projected/48f7f163-debd-4bd8-87ce-79377bf5170c-kube-api-access-p872j\") on node \"addons-792068\" DevicePath \"\""
	Oct 05 21:21:20 addons-792068 kubelet[1359]: I1005 21:21:20.948842    1359 scope.go:117] "RemoveContainer" containerID="fccb251b0fedb05db4dcd034eb030339eefd52ff65fcf0c894e2edc76d1fd4af"
	Oct 05 21:21:21 addons-792068 kubelet[1359]: I1005 21:21:21.801959    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="001705bf-5fa5-46ca-9159-f1c6ce96adb0" path="/var/lib/kubelet/pods/001705bf-5fa5-46ca-9159-f1c6ce96adb0/volumes"
	Oct 05 21:21:21 addons-792068 kubelet[1359]: I1005 21:21:21.802801    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="188340a7-8f42-4c8a-b994-274ac0bc605c" path="/var/lib/kubelet/pods/188340a7-8f42-4c8a-b994-274ac0bc605c/volumes"
	Oct 05 21:21:21 addons-792068 kubelet[1359]: I1005 21:21:21.803901    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="48f7f163-debd-4bd8-87ce-79377bf5170c" path="/var/lib/kubelet/pods/48f7f163-debd-4bd8-87ce-79377bf5170c/volumes"
	Oct 05 21:21:23 addons-792068 kubelet[1359]: I1005 21:21:23.799674    1359 scope.go:117] "RemoveContainer" containerID="5c494f2b3f31b4080a57053b4a0436d5fd83a3d8bb542a1e4462439ce3a74fea"
	Oct 05 21:21:23 addons-792068 kubelet[1359]: I1005 21:21:23.957875    1359 scope.go:117] "RemoveContainer" containerID="5c494f2b3f31b4080a57053b4a0436d5fd83a3d8bb542a1e4462439ce3a74fea"
	Oct 05 21:21:23 addons-792068 kubelet[1359]: I1005 21:21:23.958099    1359 scope.go:117] "RemoveContainer" containerID="a79eacb7704eaf5e01bc04b7132a9dd5cb6bca74e9f7e04133c65679bbdc02ce"
	Oct 05 21:21:23 addons-792068 kubelet[1359]: E1005 21:21:23.958402    1359 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-bfjfd_default(274e937f-ae75-4a74-aef5-9255526c29d0)\"" pod="default/hello-world-app-5d77478584-bfjfd" podUID="274e937f-ae75-4a74-aef5-9255526c29d0"
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.958031    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bbmv\" (UniqueName: \"kubernetes.io/projected/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-kube-api-access-4bbmv\") pod \"48e9336a-a7ca-4ee7-ab6f-e739b0be0a32\" (UID: \"48e9336a-a7ca-4ee7-ab6f-e739b0be0a32\") "
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.958083    1359 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-webhook-cert\") pod \"48e9336a-a7ca-4ee7-ab6f-e739b0be0a32\" (UID: \"48e9336a-a7ca-4ee7-ab6f-e739b0be0a32\") "
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.961445    1359 scope.go:117] "RemoveContainer" containerID="ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6"
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.964260    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "48e9336a-a7ca-4ee7-ab6f-e739b0be0a32" (UID: "48e9336a-a7ca-4ee7-ab6f-e739b0be0a32"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.967284    1359 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-kube-api-access-4bbmv" (OuterVolumeSpecName: "kube-api-access-4bbmv") pod "48e9336a-a7ca-4ee7-ab6f-e739b0be0a32" (UID: "48e9336a-a7ca-4ee7-ab6f-e739b0be0a32"). InnerVolumeSpecName "kube-api-access-4bbmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.984480    1359 scope.go:117] "RemoveContainer" containerID="ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6"
	Oct 05 21:21:24 addons-792068 kubelet[1359]: E1005 21:21:24.984913    1359 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6\": container with ID starting with ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6 not found: ID does not exist" containerID="ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6"
	Oct 05 21:21:24 addons-792068 kubelet[1359]: I1005 21:21:24.984973    1359 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6"} err="failed to get container status \"ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6\": rpc error: code = NotFound desc = could not find container \"ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6\": container with ID starting with ad5d348f8aca2b2473b083d4591301887f94f2ec43a30e0acf93baba83d60de6 not found: ID does not exist"
	Oct 05 21:21:25 addons-792068 kubelet[1359]: I1005 21:21:25.058947    1359 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4bbmv\" (UniqueName: \"kubernetes.io/projected/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-kube-api-access-4bbmv\") on node \"addons-792068\" DevicePath \"\""
	Oct 05 21:21:25 addons-792068 kubelet[1359]: I1005 21:21:25.059008    1359 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32-webhook-cert\") on node \"addons-792068\" DevicePath \"\""
	Oct 05 21:21:25 addons-792068 kubelet[1359]: I1005 21:21:25.801111    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="48e9336a-a7ca-4ee7-ab6f-e739b0be0a32" path="/var/lib/kubelet/pods/48e9336a-a7ca-4ee7-ab6f-e739b0be0a32/volumes"
	
	* 
	* ==> storage-provisioner [505f0e88dd6446d8b1988ae79e235dabc7229933394ed3a12aace15f66c5cba3] <==
	* I1005 21:16:47.208010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 21:16:47.236450       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 21:16:47.236629       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 21:16:47.244825       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 21:16:47.245644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcdc6718-d235-4d55-9e38-42895f0a388d", APIVersion:"v1", ResourceVersion:"860", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-792068_d1dd3836-5a5c-4fb5-b12c-6124d16a60b7 became leader
	I1005 21:16:47.247583       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-792068_d1dd3836-5a5c-4fb5-b12c-6124d16a60b7!
	I1005 21:16:47.350748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-792068_d1dd3836-5a5c-4fb5-b12c-6124d16a60b7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-792068 -n addons-792068
helpers_test.go:261: (dbg) Run:  kubectl --context addons-792068 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (170.46s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.45s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:205: (dbg) Run:  kubectl --context ingress-addon-legacy-570164 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:205: (dbg) Done: kubectl --context ingress-addon-legacy-570164 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.271282026s)
addons_test.go:230: (dbg) Run:  kubectl --context ingress-addon-legacy-570164 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-570164 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [41d46131-db93-4f1b-b6af-d8d50d6e3c82] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [41d46131-db93-4f1b-b6af-d8d50d6e3c82] Running
addons_test.go:248: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.012957793s
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1005 21:30:49.946651 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:49.952011 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:49.962373 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:49.982612 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:50.022899 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:50.103319 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:50.263520 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:50.584063 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:51.225273 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:52.505544 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:30:55.065792 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:31:00.186085 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:31:10.426874 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-570164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.734589882s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
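For reference, the check that timed out above is a plain HTTP GET with an overridden Host header, run inside the VM as: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'. A minimal Go sketch of the same probe follows; it is illustrative only — the probeIngress name, the 10-second timeout, and issuing the request directly from the host instead of via minikube ssh are assumptions, not the test's actual helper code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeIngress mirrors curl -s http://127.0.0.1/ -H 'Host: nginx.example.com':
// the ingress controller routes on the Host header, so it is set explicitly.
func probeIngress(base, host string) error {
	req, err := http.NewRequest(http.MethodGet, base, nil)
	if err != nil {
		return err
	}
	req.Host = host // Go sends this value as the Host header
	client := &http.Client{Timeout: 10 * time.Second} // timeout is an assumption
	resp, err := client.Do(req)
	if err != nil {
		return err // a hang surfaces here, analogous to curl's timeout exit
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, base)
	}
	return nil
}

func main() {
	if err := probeIngress("http://127.0.0.1/", "nginx.example.com"); err != nil {
		fmt.Println("ingress probe failed:", err)
	}
}

The "ssh: Process exited with status 28" in the stderr above is curl's operation-timeout exit code propagated through ssh; in the sketch the analogous condition appears as a client timeout error.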
addons_test.go:284: (dbg) Run:  kubectl --context ingress-addon-legacy-570164 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:295: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.0202246s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:297: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:301: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons disable ingress-dns --alsologtostderr -v=1: (1.16419295s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons disable ingress --alsologtostderr -v=1
E1005 21:31:30.907936 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons disable ingress --alsologtostderr -v=1: (7.562269067s)
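The nslookup failure above is a lookup of hello-john.test sent directly to the node IP (192.168.49.2) as the DNS server. Below is a minimal Go equivalent, assuming only what the transcript shows (the server address, the record name, and a roughly 15-second budget); the resolver wiring is a sketch, not part of the test suite.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every lookup to the ingress-dns endpoint on the node IP,
	// mirroring: nslookup hello-john.test 192.168.49.2
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// The failed run above shows the timeout case:
		// ";; connection timed out; no servers could be reached"
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}

Pointing the resolver's Dial at 192.168.49.2:53 forces every query to the ingress-dns endpoint, which is what passing the server argument to nslookup does; the ";; connection timed out" output above corresponds to the context deadline expiring here.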
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-570164
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-570164:

-- stdout --
	[
	    {
	        "Id": "7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d",
	        "Created": "2023-10-05T21:27:09.918339735Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1482039,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:27:10.265072923Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d/hosts",
	        "LogPath": "/var/lib/docker/containers/7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d/7abdce9b823965d66c46ba909ec2490c37e4da3d8f7b4cb1f0e467fb7a56fa1d-json.log",
	        "Name": "/ingress-addon-legacy-570164",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-570164:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-570164",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36c10cf1ca70cc96bb391a033d9c0876223aea38fab74ce7928518f2d5170823-init/diff:/var/lib/docker/overlay2/d90b9e2f667f252141d832d5a382f20f93e3e59a1248437095891beeaafeffd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36c10cf1ca70cc96bb391a033d9c0876223aea38fab74ce7928518f2d5170823/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36c10cf1ca70cc96bb391a033d9c0876223aea38fab74ce7928518f2d5170823/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36c10cf1ca70cc96bb391a033d9c0876223aea38fab74ce7928518f2d5170823/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-570164",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-570164/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-570164",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-570164",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-570164",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8be57ec72b3a7b94a4ab7df649a823869fe138d8f502a20fb9b9c75fc9f8cbc9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34092"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34089"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8be57ec72b3a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-570164": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7abdce9b8239",
	                        "ingress-addon-legacy-570164"
	                    ],
	                    "NetworkID": "56d163a3192f768ce683fa6b6f9f6120836c9875fa9f808c6ca853dc6eeaa6cb",
	                    "EndpointID": "26e98a065598331ce9fe00ebe3d8f4f0061762372e79816c90ef174f3411f71a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-570164 -n ingress-addon-legacy-570164
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-570164 logs -n 25: (1.544900525s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-322912                 | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| ssh            | functional-322912 ssh findmnt        | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | -T /mount1                           |                             |         |         |                     |                     |
	| start          | -p functional-322912                 | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| ssh            | functional-322912 ssh findmnt        | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | -p functional-322912                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| ssh            | functional-322912 ssh findmnt        | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-322912                 | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| update-context | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-322912 ssh pgrep          | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-322912 image build -t     | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | localhost/my-image:functional-322912 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-322912 image ls           | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	| image          | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-322912                    | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-322912                 | functional-322912           | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:26 UTC |
	| start          | -p ingress-addon-legacy-570164       | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:26 UTC | 05 Oct 23 21:28 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570164          | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:28 UTC | 05 Oct 23 21:28 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570164          | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:28 UTC | 05 Oct 23 21:28 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-570164          | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:29 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-570164 ip       | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:31 UTC | 05 Oct 23 21:31 UTC |
	| addons         | ingress-addon-legacy-570164          | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:31 UTC | 05 Oct 23 21:31 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-570164          | ingress-addon-legacy-570164 | jenkins | v1.31.2 | 05 Oct 23 21:31 UTC | 05 Oct 23 21:31 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:26:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:26:53.238077 1481584 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:26:53.238355 1481584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:53.238383 1481584 out.go:309] Setting ErrFile to fd 2...
	I1005 21:26:53.238404 1481584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:53.239123 1481584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:26:53.239683 1481584 out.go:303] Setting JSON to false
	I1005 21:26:53.240692 1481584 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25761,"bootTime":1696515453,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:26:53.240809 1481584 start.go:138] virtualization:  
	I1005 21:26:53.243663 1481584 out.go:177] * [ingress-addon-legacy-570164] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:26:53.244981 1481584 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:26:53.246922 1481584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:26:53.245177 1481584 notify.go:220] Checking for updates...
	I1005 21:26:53.250625 1481584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:26:53.252520 1481584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:26:53.254053 1481584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:26:53.255873 1481584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:26:53.257975 1481584 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:26:53.282658 1481584 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:26:53.282764 1481584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:26:53.364194 1481584 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-05 21:26:53.354399425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:26:53.364301 1481584 docker.go:294] overlay module found
	I1005 21:26:53.366450 1481584 out.go:177] * Using the docker driver based on user configuration
	I1005 21:26:53.368145 1481584 start.go:298] selected driver: docker
	I1005 21:26:53.368159 1481584 start.go:902] validating driver "docker" against <nil>
	I1005 21:26:53.368172 1481584 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:26:53.368800 1481584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:26:53.438259 1481584 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-05 21:26:53.428410854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:26:53.438445 1481584 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:26:53.438690 1481584 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:26:53.440520 1481584 out.go:177] * Using Docker driver with root privileges
	I1005 21:26:53.442353 1481584 cni.go:84] Creating CNI manager for ""
	I1005 21:26:53.442375 1481584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:26:53.442386 1481584 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:26:53.442400 1481584 start_flags.go:321] config:
	{Name:ingress-addon-legacy-570164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:26:53.444565 1481584 out.go:177] * Starting control plane node ingress-addon-legacy-570164 in cluster ingress-addon-legacy-570164
	I1005 21:26:53.446493 1481584 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:26:53.448254 1481584 out.go:177] * Pulling base image ...
	I1005 21:26:53.450365 1481584 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 21:26:53.450418 1481584 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:26:53.467854 1481584 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:26:53.467876 1481584 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:26:53.521114 1481584 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1005 21:26:53.521139 1481584 cache.go:57] Caching tarball of preloaded images
	I1005 21:26:53.521310 1481584 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 21:26:53.523571 1481584 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1005 21:26:53.525433 1481584 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:26:53.648033 1481584 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1005 21:27:01.975938 1481584 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:27:01.976048 1481584 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:27:03.192540 1481584 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on crio
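The download above carries its md5 in the URL query string (checksum=md5:8ddd7f37d9a9977fe856222993d36c3d), which is what the preload.go:249/256 lines verify after saving. A minimal by-hand check of the cached tarball, reusing the path and checksum from the log (illustrative only, not how preload.go itself runs the check):

	# compare the cached preload tarball against the md5 from the download URL
	expected=8ddd7f37d9a9977fe856222993d36c3d
	tarball=/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	[ "$(md5sum "$tarball" | awk '{print $1}')" = "$expected" ] && echo "checksum OK" || echo "checksum mismatch"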
	I1005 21:27:03.192921 1481584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/config.json ...
	I1005 21:27:03.192958 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/config.json: {Name:mk6e0d509eb837189ca12dc29ac118d805dc6548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:03.193156 1481584 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:27:03.193184 1481584 start.go:365] acquiring machines lock for ingress-addon-legacy-570164: {Name:mk1035961929d0c7d8645bf6b61f7fd6d2dd484d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:27:03.193246 1481584 start.go:369] acquired machines lock for "ingress-addon-legacy-570164" in 47.089µs
	I1005 21:27:03.193271 1481584 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-570164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:27:03.193367 1481584 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:27:03.195980 1481584 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1005 21:27:03.196299 1481584 start.go:159] libmachine.API.Create for "ingress-addon-legacy-570164" (driver="docker")
	I1005 21:27:03.196339 1481584 client.go:168] LocalClient.Create starting
	I1005 21:27:03.196441 1481584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:27:03.196482 1481584 main.go:141] libmachine: Decoding PEM data...
	I1005 21:27:03.196502 1481584 main.go:141] libmachine: Parsing certificate...
	I1005 21:27:03.196578 1481584 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:27:03.196600 1481584 main.go:141] libmachine: Decoding PEM data...
	I1005 21:27:03.196615 1481584 main.go:141] libmachine: Parsing certificate...
	I1005 21:27:03.196982 1481584 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:27:03.214980 1481584 cli_runner.go:211] docker network inspect ingress-addon-legacy-570164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:27:03.215064 1481584 network_create.go:281] running [docker network inspect ingress-addon-legacy-570164] to gather additional debugging logs...
	I1005 21:27:03.215085 1481584 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570164
	W1005 21:27:03.233076 1481584 cli_runner.go:211] docker network inspect ingress-addon-legacy-570164 returned with exit code 1
	I1005 21:27:03.233109 1481584 network_create.go:284] error running [docker network inspect ingress-addon-legacy-570164]: docker network inspect ingress-addon-legacy-570164: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-570164 not found
	I1005 21:27:03.233123 1481584 network_create.go:286] output of [docker network inspect ingress-addon-legacy-570164]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-570164 not found
	
	** /stderr **
	I1005 21:27:03.233221 1481584 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:27:03.251420 1481584 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e410}
	I1005 21:27:03.251466 1481584 network_create.go:124] attempt to create docker network ingress-addon-legacy-570164 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1005 21:27:03.251530 1481584 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-570164 ingress-addon-legacy-570164
	I1005 21:27:03.330858 1481584 network_create.go:108] docker network ingress-addon-legacy-570164 192.168.49.0/24 created
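The network_create step above asks Docker for a fixed subnet and gateway; both can be read back with the same Go template the log itself uses for network inspection:

	# confirm the subnet/gateway minikube requested (expected: 192.168.49.0/24 192.168.49.1)
	docker network inspect ingress-addon-legacy-570164 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'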
	I1005 21:27:03.330891 1481584 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-570164" container
	I1005 21:27:03.330968 1481584 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:27:03.348026 1481584 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-570164 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570164 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:27:03.367096 1481584 oci.go:103] Successfully created a docker volume ingress-addon-legacy-570164
	I1005 21:27:03.367191 1481584 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-570164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570164 --entrypoint /usr/bin/test -v ingress-addon-legacy-570164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:27:04.873166 1481584 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-570164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570164 --entrypoint /usr/bin/test -v ingress-addon-legacy-570164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.505928043s)
	I1005 21:27:04.873200 1481584 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-570164
	I1005 21:27:04.873220 1481584 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 21:27:04.873241 1481584 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:27:04.873374 1481584 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-570164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:27:09.830412 1481584 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-570164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.956991227s)
	I1005 21:27:09.830446 1481584 kic.go:199] duration metric: took 4.957200 seconds to extract preloaded images to volume
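The preload is unpacked into a named Docker volume (ingress-addon-legacy-570164) that the docker run at 21:27:09.901 later mounts at /var inside the kic container. The volume's backing path on the host can be located directly:

	# where Docker keeps the kic volume the preloaded images were extracted into
	docker volume inspect ingress-addon-legacy-570164 --format '{{.Mountpoint}}'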
	W1005 21:27:09.830589 1481584 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:27:09.830694 1481584 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:27:09.901625 1481584 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-570164 --name ingress-addon-legacy-570164 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-570164 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-570164 --network ingress-addon-legacy-570164 --ip 192.168.49.2 --volume ingress-addon-legacy-570164:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:27:10.274056 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Running}}
	I1005 21:27:10.295421 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:27:10.327978 1481584 cli_runner.go:164] Run: docker exec ingress-addon-legacy-570164 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:27:10.406412 1481584 oci.go:144] the created container "ingress-addon-legacy-570164" has a running status.
	I1005 21:27:10.406457 1481584 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa...
	I1005 21:27:11.321140 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 21:27:11.321201 1481584 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:27:11.356927 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:27:11.378412 1481584 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:27:11.378432 1481584 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-570164 chown docker:docker /home/docker/.ssh/authorized_keys]
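Steps 21:27:10.40 through 21:27:11.45 provision SSH access to the kic container: a fresh keypair is written under .minikube/machines and its public half installed as the docker user's authorized_keys. A rough by-hand equivalent (docker cp stands in for the exec-based copy kic_runner actually performs):

	key=/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa
	docker cp "${key}.pub" ingress-addon-legacy-570164:/home/docker/.ssh/authorized_keys
	docker exec --privileged ingress-addon-legacy-570164 chown docker:docker /home/docker/.ssh/authorized_keys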
	I1005 21:27:11.450582 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:27:11.474002 1481584 machine.go:88] provisioning docker machine ...
	I1005 21:27:11.474033 1481584 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-570164"
	I1005 21:27:11.474098 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:11.493459 1481584 main.go:141] libmachine: Using SSH client type: native
	I1005 21:27:11.493878 1481584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34092 <nil> <nil>}
	I1005 21:27:11.493892 1481584 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-570164 && echo "ingress-addon-legacy-570164" | sudo tee /etc/hostname
	I1005 21:27:11.646233 1481584 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-570164
	
	I1005 21:27:11.646376 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:11.670778 1481584 main.go:141] libmachine: Using SSH client type: native
	I1005 21:27:11.671238 1481584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34092 <nil> <nil>}
	I1005 21:27:11.671258 1481584 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-570164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-570164/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-570164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:27:11.806726 1481584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:27:11.806754 1481584 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:27:11.806775 1481584 ubuntu.go:177] setting up certificates
	I1005 21:27:11.806792 1481584 provision.go:83] configureAuth start
	I1005 21:27:11.806883 1481584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570164
	I1005 21:27:11.826795 1481584 provision.go:138] copyHostCerts
	I1005 21:27:11.826840 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:27:11.826875 1481584 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:27:11.826889 1481584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:27:11.826972 1481584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:27:11.827057 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:27:11.827079 1481584 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:27:11.827087 1481584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:27:11.827117 1481584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:27:11.827165 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:27:11.827184 1481584 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:27:11.827192 1481584 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:27:11.827220 1481584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:27:11.827270 1481584 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-570164 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-570164]
	I1005 21:27:12.154927 1481584 provision.go:172] copyRemoteCerts
	I1005 21:27:12.155003 1481584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:27:12.155049 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.174269 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:27:12.272164 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 21:27:12.272228 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:27:12.300997 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 21:27:12.301070 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1005 21:27:12.329912 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 21:27:12.329975 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:27:12.358162 1481584 provision.go:86] duration metric: configureAuth took 551.350735ms
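configureAuth generates a CA-signed server certificate with the SAN list shown at provision.go:112 and scps it to /etc/docker on the node. A self-signed stand-in with the same SAN set (minikube uses its own Go cert helpers, so this openssl call is only illustrative):

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.ingress-addon-legacy-570164" \
	  -addext "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-570164"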
	I1005 21:27:12.358234 1481584 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:27:12.358451 1481584 config.go:182] Loaded profile config "ingress-addon-legacy-570164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1005 21:27:12.358598 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.377620 1481584 main.go:141] libmachine: Using SSH client type: native
	I1005 21:27:12.378050 1481584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34092 <nil> <nil>}
	I1005 21:27:12.378073 1481584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:27:12.650413 1481584 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:27:12.650439 1481584 machine.go:91] provisioned docker machine in 1.17641608s
	I1005 21:27:12.650449 1481584 client.go:171] LocalClient.Create took 9.454097871s
	I1005 21:27:12.650466 1481584 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-570164" took 9.454166975s
	I1005 21:27:12.650481 1481584 start.go:300] post-start starting for "ingress-addon-legacy-570164" (driver="docker")
	I1005 21:27:12.650491 1481584 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:27:12.650565 1481584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:27:12.650617 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.667915 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:27:12.768391 1481584 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:27:12.772663 1481584 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:27:12.772699 1481584 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:27:12.772712 1481584 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:27:12.772720 1481584 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:27:12.772734 1481584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:27:12.772805 1481584 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:27:12.772897 1481584 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:27:12.772908 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /etc/ssl/certs/14537862.pem
	I1005 21:27:12.773036 1481584 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:27:12.783683 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:27:12.812190 1481584 start.go:303] post-start completed in 161.693561ms
	I1005 21:27:12.812592 1481584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570164
	I1005 21:27:12.830541 1481584 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/config.json ...
	I1005 21:27:12.830821 1481584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:27:12.830881 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.848974 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:27:12.939438 1481584 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:27:12.945206 1481584 start.go:128] duration metric: createHost completed in 9.751822126s
	I1005 21:27:12.945230 1481584 start.go:83] releasing machines lock for "ingress-addon-legacy-570164", held for 9.751971615s
	I1005 21:27:12.945303 1481584 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-570164
	I1005 21:27:12.962625 1481584 ssh_runner.go:195] Run: cat /version.json
	I1005 21:27:12.962680 1481584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:27:12.962748 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.962684 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:27:12.987705 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:27:12.993787 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:27:13.218419 1481584 ssh_runner.go:195] Run: systemctl --version
	I1005 21:27:13.224406 1481584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:27:13.381258 1481584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:27:13.387304 1481584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:27:13.412510 1481584 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:27:13.412593 1481584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:27:13.448295 1481584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 21:27:13.448321 1481584 start.go:469] detecting cgroup driver to use...
	I1005 21:27:13.448383 1481584 detect.go:196] detected "cgroupfs" cgroup driver on host os
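The "cgroupfs" result at detect.go:196 matches the CgroupDriver field in the docker info dump earlier in this log; the same answer can be pulled straight from the daemon:

	# cgroup driver the host docker daemon reports (expected here: cgroupfs)
	docker info --format '{{.CgroupDriver}}'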
	I1005 21:27:13.448459 1481584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:27:13.467874 1481584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:27:13.481848 1481584 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:27:13.481940 1481584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:27:13.498831 1481584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:27:13.516340 1481584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:27:13.616390 1481584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:27:13.727161 1481584 docker.go:213] disabling docker service ...
	I1005 21:27:13.727271 1481584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:27:13.750531 1481584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:27:13.766212 1481584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:27:13.867327 1481584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:27:13.976016 1481584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:27:13.989268 1481584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:27:14.011760 1481584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 21:27:14.011878 1481584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:27:14.025562 1481584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 21:27:14.025707 1481584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:27:14.038181 1481584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:27:14.050407 1481584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:27:14.062819 1481584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:27:14.074539 1481584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:27:14.084886 1481584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:27:14.095354 1481584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:27:14.194581 1481584 ssh_runner.go:195] Run: sudo systemctl restart crio
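The four sed edits between 21:27:14.011 and 21:27:14.050 pin the pause image and cgroup settings in CRI-O's drop-in config before this restart. The result can be read back through the cluster's own ssh helper (expected keys inferred from the sed commands above, not read off the node):

	out/minikube-linux-arm64 -p ingress-addon-legacy-570164 ssh \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"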
	I1005 21:27:14.322330 1481584 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 21:27:14.322405 1481584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 21:27:14.327139 1481584 start.go:537] Will wait 60s for crictl version
	I1005 21:27:14.327211 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:14.331585 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:27:14.378730 1481584 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 21:27:14.378815 1481584 ssh_runner.go:195] Run: crio --version
	I1005 21:27:14.425893 1481584 ssh_runner.go:195] Run: crio --version
	I1005 21:27:14.472537 1481584 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1005 21:27:14.474455 1481584 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-570164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:27:14.495397 1481584 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1005 21:27:14.500561 1481584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
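The one-liner above rewrites /etc/hosts inside the node so host.minikube.internal resolves to the Docker network gateway (192.168.49.1). A quick check that the entry landed:

	out/minikube-linux-arm64 -p ingress-addon-legacy-570164 ssh \
	  "grep host.minikube.internal /etc/hosts"   # expected: 192.168.49.1  host.minikube.internal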
	I1005 21:27:14.515042 1481584 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1005 21:27:14.515121 1481584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:27:14.568224 1481584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 21:27:14.568298 1481584 ssh_runner.go:195] Run: which lz4
	I1005 21:27:14.572895 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1005 21:27:14.573040 1481584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1005 21:27:14.577248 1481584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1005 21:27:14.577286 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1005 21:27:16.843669 1481584 crio.go:444] Took 2.270673 seconds to copy over tarball
	I1005 21:27:16.843790 1481584 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1005 21:27:19.595969 1481584 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.752118761s)
	I1005 21:27:19.596015 1481584 crio.go:451] Took 2.752283 seconds to extract the tarball
	I1005 21:27:19.596027 1481584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1005 21:27:19.682535 1481584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:27:19.728236 1481584 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1005 21:27:19.728262 1481584 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1005 21:27:19.728301 1481584 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:27:19.728507 1481584 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:27:19.728583 1481584 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:27:19.728636 1481584 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:27:19.728731 1481584 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:27:19.728805 1481584 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1005 21:27:19.728881 1481584 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1005 21:27:19.728953 1481584 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1005 21:27:19.730192 1481584 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1005 21:27:19.730602 1481584 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1005 21:27:19.730810 1481584 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1005 21:27:19.730959 1481584 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:27:19.731120 1481584 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:27:19.731257 1481584 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:27:19.731472 1481584 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:27:19.731784 1481584 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:27:20.160707 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1005 21:27:20.198768 1481584 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.199034 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1005 21:27:20.200369 1481584 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.200581 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:27:20.217673 1481584 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1005 21:27:20.217787 1481584 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1005 21:27:20.217854 1481584 ssh_runner.go:195] Run: which crictl
	W1005 21:27:20.238704 1481584 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.238919 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1005 21:27:20.252203 1481584 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.252450 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1005 21:27:20.257886 1481584 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.258108 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1005 21:27:20.289842 1481584 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.290139 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:27:20.299473 1481584 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1005 21:27:20.299517 1481584 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1005 21:27:20.299585 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.299688 1481584 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1005 21:27:20.299713 1481584 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:27:20.299755 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.299840 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	W1005 21:27:20.326011 1481584 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1005 21:27:20.326215 1481584 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:27:20.386731 1481584 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1005 21:27:20.386796 1481584 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:27:20.386846 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.436646 1481584 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1005 21:27:20.436691 1481584 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1005 21:27:20.436768 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.436870 1481584 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1005 21:27:20.436915 1481584 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:27:20.436949 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.459045 1481584 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1005 21:27:20.459108 1481584 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:27:20.459164 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.459290 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1005 21:27:20.459353 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1005 21:27:20.459425 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1005 21:27:20.584485 1481584 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1005 21:27:20.584679 1481584 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:27:20.584701 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1005 21:27:20.584751 1481584 ssh_runner.go:195] Run: which crictl
	I1005 21:27:20.584642 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1005 21:27:20.584800 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1005 21:27:20.584857 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1005 21:27:20.584896 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1005 21:27:20.584956 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1005 21:27:20.696621 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1005 21:27:20.696688 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1005 21:27:20.696735 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1005 21:27:20.696801 1481584 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:27:20.696893 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1005 21:27:20.764216 1481584 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1005 21:27:20.764293 1481584 cache_images.go:92] LoadImages completed in 1.036016944s
	W1005 21:27:20.764350 1481584 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
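Note: the warning above is non-fatal; minikube stats each cached tarball under .minikube/cache/images before attempting a load and simply proceeds without any that are missing (here pause_3.2). A minimal Go sketch of that existence check, assuming the cache path from the log (not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Cache directory as logged above; adjust for your environment.
		cacheDir := "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64"
		images := []string{"registry.k8s.io/pause_3.2", "registry.k8s.io/coredns_1.6.7"}
		for _, img := range images {
			p := filepath.Join(cacheDir, filepath.FromSlash(img))
			if _, err := os.Stat(p); err != nil {
				// Mirrors the "no such file or directory" outcome in the log.
				fmt.Printf("skipping %s: %v\n", img, err)
				continue
			}
			fmt.Printf("would load %s\n", p)
		}
	}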
	I1005 21:27:20.764418 1481584 ssh_runner.go:195] Run: crio config
	I1005 21:27:20.838094 1481584 cni.go:84] Creating CNI manager for ""
	I1005 21:27:20.838126 1481584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:27:20.838172 1481584 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:27:20.838210 1481584 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-570164 NodeName:ingress-addon-legacy-570164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1005 21:27:20.838398 1481584 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-570164"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:27:20.838531 1481584 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-570164 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
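Note: the kubeadm config and kubelet unit above are rendered from Go templates in minikube (kubeadm.go). A minimal sketch of that template-driven rendering with the standard library; the template text and field names below are illustrative, not minikube's real templates:

	package main

	import (
		"os"
		"text/template"
	)

	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		cfg := initCfg{AdvertiseAddress: "192.168.49.2", BindPort: 8443, NodeName: "ingress-addon-legacy-570164"}
		// Execute fills the placeholders with per-profile values, producing
		// YAML like the block logged above.
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}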
	I1005 21:27:20.838631 1481584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1005 21:27:20.849138 1481584 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:27:20.849210 1481584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:27:20.859745 1481584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1005 21:27:20.880826 1481584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1005 21:27:20.901870 1481584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1005 21:27:20.922827 1481584 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:27:20.927446 1481584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
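Note: the bash one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal line, then append the current mapping. The same logic in Go, with host and IP taken from the log and error handling kept minimal:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.49.2"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // drop stale entries
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		// The real flow writes to a temp file and sudo-copies it back.
		fmt.Println(strings.Join(kept, "\n"))
	}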
	I1005 21:27:20.940547 1481584 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164 for IP: 192.168.49.2
	I1005 21:27:20.940629 1481584 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac5d4c0ae883432caac512ac8160283213d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:20.940800 1481584 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key
	I1005 21:27:20.940863 1481584 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key
	I1005 21:27:20.940929 1481584 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key
	I1005 21:27:20.940945 1481584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt with IP's: []
	I1005 21:27:21.350816 1481584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt ...
	I1005 21:27:21.350848 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: {Name:mk81fcf2dd36fd6d7a282cd7a6f69bb529a3ed88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:21.351057 1481584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key ...
	I1005 21:27:21.351072 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key: {Name:mk0ea0d4fc4ed96bbb07307cfa323389e99e34cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:21.351158 1481584 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key.dd3b5fb2
	I1005 21:27:21.351170 1481584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:27:21.699060 1481584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt.dd3b5fb2 ...
	I1005 21:27:21.699089 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt.dd3b5fb2: {Name:mk77cca3792903e675fc29d09c21fbfcea8e6170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:21.699270 1481584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key.dd3b5fb2 ...
	I1005 21:27:21.699284 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key.dd3b5fb2: {Name:mk5c30877867a2631cb889bf62a23de7b5fe7806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:21.699368 1481584 certs.go:337] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt
	I1005 21:27:21.699447 1481584 certs.go:341] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key
	I1005 21:27:21.699505 1481584 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.key
	I1005 21:27:21.699523 1481584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.crt with IP's: []
	I1005 21:27:22.187382 1481584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.crt ...
	I1005 21:27:22.187414 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.crt: {Name:mk8fb24f8b5e76546020b74bc8f84c606044fbe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:27:22.187600 1481584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.key ...
	I1005 21:27:22.187614 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.key: {Name:mkf0ba66a3de0739a798aa98959381ef5c1dc02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
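Note: crypto.go is generating key pairs and x509 certificates here, with IP SANs matching the apiserver cert's logged list. A rough, self-contained sketch of that step using the standard library; this version is self-signed for brevity, whereas minikube signs with its minikubeCA key:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			// IP SANs as logged for apiserver.crt above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}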
	I1005 21:27:22.187710 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1005 21:27:22.187734 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1005 21:27:22.187750 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1005 21:27:22.187766 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1005 21:27:22.187785 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 21:27:22.187809 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 21:27:22.187826 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 21:27:22.187841 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 21:27:22.187900 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem (1338 bytes)
	W1005 21:27:22.187942 1481584 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786_empty.pem, impossibly tiny 0 bytes
	I1005 21:27:22.187956 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:27:22.187983 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:27:22.188020 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:27:22.188067 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem (1675 bytes)
	I1005 21:27:22.188118 1481584 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:27:22.188149 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem -> /usr/share/ca-certificates/1453786.pem
	I1005 21:27:22.188168 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /usr/share/ca-certificates/14537862.pem
	I1005 21:27:22.188183 1481584 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:27:22.188763 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:27:22.217863 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 21:27:22.247305 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:27:22.275885 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1005 21:27:22.304227 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:27:22.332688 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 21:27:22.361281 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:27:22.390101 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:27:22.418978 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem --> /usr/share/ca-certificates/1453786.pem (1338 bytes)
	I1005 21:27:22.447308 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /usr/share/ca-certificates/14537862.pem (1708 bytes)
	I1005 21:27:22.476265 1481584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:27:22.506093 1481584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:27:22.527896 1481584 ssh_runner.go:195] Run: openssl version
	I1005 21:27:22.535035 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1453786.pem && ln -fs /usr/share/ca-certificates/1453786.pem /etc/ssl/certs/1453786.pem"
	I1005 21:27:22.546864 1481584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1453786.pem
	I1005 21:27:22.551607 1481584 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:27:22.551679 1481584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1453786.pem
	I1005 21:27:22.560166 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1453786.pem /etc/ssl/certs/51391683.0"
	I1005 21:27:22.571721 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14537862.pem && ln -fs /usr/share/ca-certificates/14537862.pem /etc/ssl/certs/14537862.pem"
	I1005 21:27:22.583323 1481584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14537862.pem
	I1005 21:27:22.588071 1481584 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:27:22.588135 1481584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14537862.pem
	I1005 21:27:22.596802 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14537862.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 21:27:22.608515 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:27:22.620162 1481584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:27:22.624824 1481584 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:27:22.624937 1481584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:27:22.633890 1481584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
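Note: the openssl/ln pairs above exist because OpenSSL looks CA certificates up by subject hash, so each PEM under /usr/share/ca-certificates gets a "<hash>.0" symlink in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A Go sketch of the same step, shelling out to the exact openssl invocation from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same command as logged: prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link)
	}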
	I1005 21:27:22.645558 1481584 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:27:22.650073 1481584 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:27:22.650166 1481584 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-570164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-570164 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:27:22.650265 1481584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 21:27:22.650333 1481584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:27:22.694308 1481584 cri.go:89] found id: ""
	I1005 21:27:22.694428 1481584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:27:22.704803 1481584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:27:22.715460 1481584 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:27:22.715572 1481584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:27:22.725997 1481584 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:27:22.726062 1481584 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:27:22.790435 1481584 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1005 21:27:22.790739 1481584 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:27:22.845103 1481584 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:27:22.845172 1481584 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:27:22.845211 1481584 kubeadm.go:322] OS: Linux
	I1005 21:27:22.845261 1481584 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:27:22.845362 1481584 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:27:22.845414 1481584 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:27:22.845470 1481584 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:27:22.845519 1481584 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:27:22.845572 1481584 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:27:22.940756 1481584 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:27:22.940862 1481584 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:27:22.940953 1481584 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1005 21:27:23.198690 1481584 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:27:23.202338 1481584 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:27:23.202422 1481584 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:27:23.309725 1481584 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:27:23.314322 1481584 out.go:204]   - Generating certificates and keys ...
	I1005 21:27:23.314443 1481584 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:27:23.314519 1481584 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:27:23.656840 1481584 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:27:24.804656 1481584 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:27:25.598493 1481584 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:27:26.104289 1481584 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:27:26.575431 1481584 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:27:26.575859 1481584 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-570164 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:27:27.164364 1481584 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:27:27.164802 1481584 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-570164 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1005 21:27:27.824667 1481584 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:27:28.365073 1481584 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:27:28.652413 1481584 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:27:28.652730 1481584 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:27:29.004173 1481584 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:27:29.817068 1481584 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:27:30.524727 1481584 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:27:31.388847 1481584 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:27:31.390233 1481584 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:27:31.392974 1481584 out.go:204]   - Booting up control plane ...
	I1005 21:27:31.393096 1481584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:27:31.401736 1481584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:27:31.405094 1481584 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:27:31.406824 1481584 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:27:31.412084 1481584 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:27:43.417134 1481584 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002922 seconds
	I1005 21:27:43.417253 1481584 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:27:43.428729 1481584 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:27:43.954908 1481584 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:27:43.955085 1481584 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-570164 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1005 21:27:44.464601 1481584 kubeadm.go:322] [bootstrap-token] Using token: k9qxcl.06hyikjxq7tv3dc3
	I1005 21:27:44.466779 1481584 out.go:204]   - Configuring RBAC rules ...
	I1005 21:27:44.466924 1481584 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:27:44.473905 1481584 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:27:44.487071 1481584 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:27:44.490491 1481584 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:27:44.494009 1481584 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:27:44.496889 1481584 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:27:44.507461 1481584 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:27:44.796586 1481584 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:27:44.906239 1481584 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:27:44.907592 1481584 kubeadm.go:322] 
	I1005 21:27:44.907664 1481584 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:27:44.907673 1481584 kubeadm.go:322] 
	I1005 21:27:44.907755 1481584 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:27:44.907764 1481584 kubeadm.go:322] 
	I1005 21:27:44.907788 1481584 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:27:44.907843 1481584 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:27:44.907895 1481584 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:27:44.907911 1481584 kubeadm.go:322] 
	I1005 21:27:44.907960 1481584 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:27:44.908033 1481584 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:27:44.908103 1481584 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:27:44.908111 1481584 kubeadm.go:322] 
	I1005 21:27:44.908190 1481584 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:27:44.908267 1481584 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:27:44.908278 1481584 kubeadm.go:322] 
	I1005 21:27:44.908356 1481584 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k9qxcl.06hyikjxq7tv3dc3 \
	I1005 21:27:44.908466 1481584 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d \
	I1005 21:27:44.908492 1481584 kubeadm.go:322]     --control-plane 
	I1005 21:27:44.908500 1481584 kubeadm.go:322] 
	I1005 21:27:44.908579 1481584 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:27:44.908590 1481584 kubeadm.go:322] 
	I1005 21:27:44.909611 1481584 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k9qxcl.06hyikjxq7tv3dc3 \
	I1005 21:27:44.909724 1481584 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d 
	I1005 21:27:44.912410 1481584 kubeadm.go:322] W1005 21:27:22.789831    1227 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1005 21:27:44.912625 1481584 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:27:44.912730 1481584 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:27:44.912852 1481584 kubeadm.go:322] W1005 21:27:31.400613    1227 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 21:27:44.912971 1481584 kubeadm.go:322] W1005 21:27:31.404904    1227 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1005 21:27:44.913003 1481584 cni.go:84] Creating CNI manager for ""
	I1005 21:27:44.913019 1481584 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:27:44.914869 1481584 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:27:44.916563 1481584 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:27:44.921464 1481584 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1005 21:27:44.921488 1481584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:27:44.944509 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:27:45.551211 1481584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:27:45.551360 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:45.551431 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=ingress-addon-legacy-570164 minikube.k8s.io/updated_at=2023_10_05T21_27_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:45.708267 1481584 ops.go:34] apiserver oom_adj: -16
	I1005 21:27:45.708372 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:45.839284 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:46.439155 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:46.938620 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:47.438678 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:47.939177 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:48.438884 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:48.938805 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:49.439372 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:49.939339 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:50.439566 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:50.938885 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:51.439288 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:51.939524 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:52.439575 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:52.939117 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:53.439335 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:53.939542 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:54.438663 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:54.939603 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:55.439550 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:55.939092 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:56.439556 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:56.939449 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:57.438635 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:57.938947 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:58.439465 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:58.939541 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:59.439500 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:27:59.938642 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:28:00.438580 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:28:00.938930 1481584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:28:01.085206 1481584 kubeadm.go:1081] duration metric: took 15.533895708s to wait for elevateKubeSystemPrivileges.
	I1005 21:28:01.085240 1481584 kubeadm.go:406] StartCluster complete in 38.435081307s
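Note: the burst of half-second-spaced "kubectl get sa default" runs above is a plain poll: kubeadm creates the "default" ServiceAccount asynchronously, and minikube retries until it exists before binding cluster-admin to kube-system:default. A generic stdlib sketch of that poll-until-ready pattern (the failing check here is a stand-in for the kubectl call):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries check every interval until it succeeds or timeout elapses.
	func pollUntil(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		attempts := 0
		err := pollUntil(500*time.Millisecond, 30*time.Second, func() error {
			attempts++
			if attempts < 5 { // simulate the ServiceAccount not existing yet
				return errors.New(`serviceaccount "default" not found`)
			}
			return nil
		})
		fmt.Println(err, "after", attempts, "attempts")
	}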
	I1005 21:28:01.085257 1481584 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:28:01.085317 1481584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:28:01.086033 1481584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:28:01.086774 1481584 kapi.go:59] client config for ingress-addon-legacy-570164: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:28:01.088178 1481584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:28:01.088441 1481584 config.go:182] Loaded profile config "ingress-addon-legacy-570164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1005 21:28:01.088483 1481584 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:28:01.088552 1481584 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-570164"
	I1005 21:28:01.088567 1481584 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-570164"
	I1005 21:28:01.088607 1481584 host.go:66] Checking if "ingress-addon-legacy-570164" exists ...
	I1005 21:28:01.089078 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:28:01.089790 1481584 cert_rotation.go:137] Starting client certificate rotation controller
	I1005 21:28:01.089834 1481584 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-570164"
	I1005 21:28:01.089851 1481584 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-570164"
	I1005 21:28:01.090127 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:28:01.131997 1481584 kapi.go:59] client config for ingress-addon-legacy-570164: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:28:01.132277 1481584 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-570164"
	I1005 21:28:01.132314 1481584 host.go:66] Checking if "ingress-addon-legacy-570164" exists ...
	I1005 21:28:01.132807 1481584 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-570164 --format={{.State.Status}}
	I1005 21:28:01.157455 1481584 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:28:01.159758 1481584 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:28:01.159781 1481584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:28:01.159849 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:28:01.184396 1481584 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:28:01.184416 1481584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:28:01.184499 1481584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-570164
	I1005 21:28:01.203867 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	I1005 21:28:01.228642 1481584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34092 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/ingress-addon-legacy-570164/id_rsa Username:docker}
	W1005 21:28:01.280759 1481584 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-570164" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1005 21:28:01.280791 1481584 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
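Note: the rescale above hit an optimistic-concurrency conflict ("the object has been modified"); minikube logs it as non-retryable and moves on. The standard client-go remedy for such conflicts is to re-read the object and re-apply the change inside retry.RetryOnConflict. A sketch of that pattern (clientset wiring omitted; this is not minikube's code path):

	package rescale

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func scaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the resourceVersion is fresh.
			dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			dep.Spec.Replicas = &replicas
			_, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(), dep, metav1.UpdateOptions{})
			return err // a Conflict error triggers another Get/Update round
		})
	}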
	I1005 21:28:01.280812 1481584 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:28:01.282809 1481584 out.go:177] * Verifying Kubernetes components...
	I1005 21:28:01.284618 1481584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:28:01.329981 1481584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:28:01.330625 1481584 kapi.go:59] client config for ingress-addon-legacy-570164: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:28:01.330954 1481584 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-570164" to be "Ready" ...
	I1005 21:28:01.395987 1481584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:28:01.415458 1481584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:28:01.713606 1481584 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1005 21:28:01.859496 1481584 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1005 21:28:01.861183 1481584 addons.go:502] enable addons completed in 772.687577ms: enabled=[storage-provisioner default-storageclass]
	I1005 21:28:03.395433 1481584 node_ready.go:58] node "ingress-addon-legacy-570164" has status "Ready":"False"
	I1005 21:28:05.895274 1481584 node_ready.go:58] node "ingress-addon-legacy-570164" has status "Ready":"False"
	I1005 21:28:07.895355 1481584 node_ready.go:58] node "ingress-addon-legacy-570164" has status "Ready":"False"
	I1005 21:28:08.395383 1481584 node_ready.go:49] node "ingress-addon-legacy-570164" has status "Ready":"True"
	I1005 21:28:08.395410 1481584 node_ready.go:38] duration metric: took 7.064428345s waiting for node "ingress-addon-legacy-570164" to be "Ready" ...
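Note: node_ready.go is polling the node object until its Ready condition flips to True, which happens once kindnet wires up the pod network. The equivalent one-shot check with client-go, using the kubeconfig path logged earlier:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17363-1448442/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ingress-addon-legacy-570164", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Prints "True" once CNI is up; "False" beforehand, as in the log.
				fmt.Printf("node Ready=%s\n", c.Status)
			}
		}
	}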
	I1005 21:28:08.395421 1481584 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:28:08.402736 1481584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4rspm" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:10.411529 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-05 21:28:01 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1005 21:28:12.913278 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"False"
	I1005 21:28:14.913499 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"False"
	I1005 21:28:17.413962 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"False"
	I1005 21:28:19.913281 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"False"
	I1005 21:28:21.913366 1481584 pod_ready.go:102] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"False"
	I1005 21:28:23.913889 1481584 pod_ready.go:92] pod "coredns-66bff467f8-4rspm" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:23.913918 1481584 pod_ready.go:81] duration metric: took 15.511138851s waiting for pod "coredns-66bff467f8-4rspm" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.913931 1481584 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9fg7c" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.919265 1481584 pod_ready.go:92] pod "coredns-66bff467f8-9fg7c" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:23.919290 1481584 pod_ready.go:81] duration metric: took 5.351396ms waiting for pod "coredns-66bff467f8-9fg7c" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.919302 1481584 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.930468 1481584 pod_ready.go:92] pod "etcd-ingress-addon-legacy-570164" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:23.930495 1481584 pod_ready.go:81] duration metric: took 11.18563ms waiting for pod "etcd-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.930511 1481584 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.935274 1481584 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-570164" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:23.935298 1481584 pod_ready.go:81] duration metric: took 4.779647ms waiting for pod "kube-apiserver-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.935310 1481584 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.939995 1481584 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-570164" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:23.940023 1481584 pod_ready.go:81] duration metric: took 4.705965ms waiting for pod "kube-controller-manager-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:23.940036 1481584 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blbsg" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:24.109488 1481584 request.go:629] Waited for 169.337268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-blbsg
	I1005 21:28:24.308946 1481584 request.go:629] Waited for 196.291695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-570164
	I1005 21:28:24.311723 1481584 pod_ready.go:92] pod "kube-proxy-blbsg" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:24.311747 1481584 pod_ready.go:81] duration metric: took 371.704352ms waiting for pod "kube-proxy-blbsg" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:24.311758 1481584 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:24.509201 1481584 request.go:629] Waited for 197.340758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-570164
	I1005 21:28:24.709556 1481584 request.go:629] Waited for 197.352746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-570164
	I1005 21:28:24.712338 1481584 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-570164" in "kube-system" namespace has status "Ready":"True"
	I1005 21:28:24.712362 1481584 pod_ready.go:81] duration metric: took 400.596087ms waiting for pod "kube-scheduler-ingress-addon-legacy-570164" in "kube-system" namespace to be "Ready" ...
	I1005 21:28:24.712375 1481584 pod_ready.go:38] duration metric: took 16.31694291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
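Note: the pod_ready.go lines above are minikube polling each system pod until its Ready condition turns True, on roughly a two-second cadence against the apiserver. A minimal client-go sketch of that style of check follows; the kubeconfig path, pod name, and intervals are illustrative, not minikube's actual implementation.

    // Sketch: wait for a pod's Ready condition, in the style of pod_ready.go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bff467f8-4rspm", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // matches the ~2s spacing of the log lines
        }
        fmt.Println("timed out waiting for Ready")
    }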
	I1005 21:28:24.712391 1481584 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:28:24.712452 1481584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:28:24.725713 1481584 api_server.go:72] duration metric: took 23.444861796s to wait for apiserver process to appear ...
	I1005 21:28:24.725738 1481584 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:28:24.725755 1481584 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1005 21:28:24.734567 1481584 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1005 21:28:24.735438 1481584 api_server.go:141] control plane version: v1.18.20
	I1005 21:28:24.735467 1481584 api_server.go:131] duration metric: took 9.7184ms to wait for apiserver health ...
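Note: the healthz check logged above is a plain HTTPS GET against https://192.168.49.2:8443/healthz, which answers 200 with the literal body "ok" when the apiserver is healthy. A hedged sketch of that probe; skipping certificate verification is a shortcut here, where the real check would trust the cluster CA instead.

    // Sketch: probe the apiserver /healthz endpoint and expect "200 ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: InsecureSkipVerify for brevity only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }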
	I1005 21:28:24.735475 1481584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:28:24.908892 1481584 request.go:629] Waited for 173.351419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:28:24.915662 1481584 system_pods.go:59] 9 kube-system pods found
	I1005 21:28:24.915698 1481584 system_pods.go:61] "coredns-66bff467f8-4rspm" [ad5ea013-4d7d-49ed-beaa-a7d0593cf0fa] Running
	I1005 21:28:24.915705 1481584 system_pods.go:61] "coredns-66bff467f8-9fg7c" [0b457c91-d2d1-48eb-bdea-742f5237d5ed] Running
	I1005 21:28:24.915711 1481584 system_pods.go:61] "etcd-ingress-addon-legacy-570164" [1b556ecd-359f-44c2-8901-34af381d050c] Running
	I1005 21:28:24.915717 1481584 system_pods.go:61] "kindnet-5g5sr" [51837020-c9dc-416f-83df-f0bb4c95869e] Running
	I1005 21:28:24.915723 1481584 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-570164" [d8875d18-b1e6-430c-8efd-a22657e50b4b] Running
	I1005 21:28:24.915728 1481584 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-570164" [9cf8a8f0-a791-425b-ae18-c76b086512fc] Running
	I1005 21:28:24.915738 1481584 system_pods.go:61] "kube-proxy-blbsg" [32726c89-4002-4053-b46b-fe35ab196e63] Running
	I1005 21:28:24.915748 1481584 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-570164" [1c55e776-f5a8-4a3b-92ee-9249a01108eb] Running
	I1005 21:28:24.915757 1481584 system_pods.go:61] "storage-provisioner" [ae42c506-8c29-44aa-a0d4-1c9834b4035d] Running
	I1005 21:28:24.915764 1481584 system_pods.go:74] duration metric: took 180.282593ms to wait for pod list to return data ...
	I1005 21:28:24.915778 1481584 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:28:25.109149 1481584 request.go:629] Waited for 193.222935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 21:28:25.111931 1481584 default_sa.go:45] found service account: "default"
	I1005 21:28:25.111964 1481584 default_sa.go:55] duration metric: took 196.17816ms for default service account to be created ...
	I1005 21:28:25.111976 1481584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:28:25.309426 1481584 request.go:629] Waited for 197.376229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:28:25.315706 1481584 system_pods.go:86] 9 kube-system pods found
	I1005 21:28:25.315742 1481584 system_pods.go:89] "coredns-66bff467f8-4rspm" [ad5ea013-4d7d-49ed-beaa-a7d0593cf0fa] Running
	I1005 21:28:25.315750 1481584 system_pods.go:89] "coredns-66bff467f8-9fg7c" [0b457c91-d2d1-48eb-bdea-742f5237d5ed] Running
	I1005 21:28:25.315756 1481584 system_pods.go:89] "etcd-ingress-addon-legacy-570164" [1b556ecd-359f-44c2-8901-34af381d050c] Running
	I1005 21:28:25.315766 1481584 system_pods.go:89] "kindnet-5g5sr" [51837020-c9dc-416f-83df-f0bb4c95869e] Running
	I1005 21:28:25.315771 1481584 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-570164" [d8875d18-b1e6-430c-8efd-a22657e50b4b] Running
	I1005 21:28:25.315776 1481584 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-570164" [9cf8a8f0-a791-425b-ae18-c76b086512fc] Running
	I1005 21:28:25.315784 1481584 system_pods.go:89] "kube-proxy-blbsg" [32726c89-4002-4053-b46b-fe35ab196e63] Running
	I1005 21:28:25.315799 1481584 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-570164" [1c55e776-f5a8-4a3b-92ee-9249a01108eb] Running
	I1005 21:28:25.315806 1481584 system_pods.go:89] "storage-provisioner" [ae42c506-8c29-44aa-a0d4-1c9834b4035d] Running
	I1005 21:28:25.315814 1481584 system_pods.go:126] duration metric: took 203.832188ms to wait for k8s-apps to be running ...
	I1005 21:28:25.315827 1481584 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:28:25.315886 1481584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:28:25.330297 1481584 system_svc.go:56] duration metric: took 14.459065ms WaitForService to wait for kubelet.
	I1005 21:28:25.330337 1481584 kubeadm.go:581] duration metric: took 24.049481947s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:28:25.330359 1481584 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:28:25.508704 1481584 request.go:629] Waited for 178.27272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1005 21:28:25.511665 1481584 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:28:25.511702 1481584 node_conditions.go:123] node cpu capacity is 2
	I1005 21:28:25.511714 1481584 node_conditions.go:105] duration metric: took 181.34939ms to run NodePressure ...
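Note: the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, not from server-side APF. When rest.Config leaves QPS and Burst at zero, client-go falls back to defaults of roughly QPS=5 and Burst=10, which is enough to produce the ~170-200ms waits logged above. A sketch of raising those knobs; the values and kubeconfig path are illustrative.

    // Sketch: raise client-go's client-side rate limits on a rest.Config.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        // Zero values fall back to client-go defaults (about QPS=5, Burst=10),
        // which is what throttles the GETs logged above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }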
	I1005 21:28:25.511725 1481584 start.go:228] waiting for startup goroutines ...
	I1005 21:28:25.511732 1481584 start.go:233] waiting for cluster config update ...
	I1005 21:28:25.511743 1481584 start.go:242] writing updated cluster config ...
	I1005 21:28:25.512023 1481584 ssh_runner.go:195] Run: rm -f paused
	I1005 21:28:25.573070 1481584 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1005 21:28:25.575294 1481584 out.go:177] 
	W1005 21:28:25.577479 1481584 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1005 21:28:25.579278 1481584 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1005 21:28:25.581106 1481584 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-570164" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.213720146Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=f24f7b4c-e1be-4a20-89c8-edb990db987a name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.214070488Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=f24f7b4c-e1be-4a20-89c8-edb990db987a name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.214759159Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-fcknb/hello-world-app" id=0163924d-294f-4c8c-8188-6ce53afeddaf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.214845379Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.314439962Z" level=info msg="Created container d67804247e1e0eedbe927ddb637998c1fd3e2934fb8f92297ad8a6414e47fd57: default/hello-world-app-5f5d8b66bb-fcknb/hello-world-app" id=0163924d-294f-4c8c-8188-6ce53afeddaf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.315082652Z" level=info msg="Starting container: d67804247e1e0eedbe927ddb637998c1fd3e2934fb8f92297ad8a6414e47fd57" id=5a5039c3-8d89-4342-9594-ca2a5db17485 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Oct 05 21:31:32 ingress-addon-legacy-570164 conmon[3805]: conmon d67804247e1e0eedbe92 <ninfo>: container 3816 exited with status 1
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.330227428Z" level=info msg="Started container" PID=3816 containerID=d67804247e1e0eedbe927ddb637998c1fd3e2934fb8f92297ad8a6414e47fd57 description=default/hello-world-app-5f5d8b66bb-fcknb/hello-world-app id=5a5039c3-8d89-4342-9594-ca2a5db17485 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=44bedbefaf3f528c056f2f3de8707bc091ca57aca216e95ab12bcad39d1fb8ef
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.658212940Z" level=info msg="Removing container: 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3" id=1f14df85-8c03-49b0-af0b-c427aac96338 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 05 21:31:32 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:32.689459651Z" level=info msg="Removed container 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3: default/hello-world-app-5f5d8b66bb-fcknb/hello-world-app" id=1f14df85-8c03-49b0-af0b-c427aac96338 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.584910592Z" level=warning msg="Stopping container b6b3538d7f748db632a3ffd74103580cd9ad988d8b6a08f1c58e90ade9ae55d4 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=8e57b8ec-dff5-4740-bb2a-15008060fc29 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 21:31:33 ingress-addon-legacy-570164 conmon[2823]: conmon b6b3538d7f748db632a3 <ninfo>: container 2834 exited with status 137
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.771755757Z" level=info msg="Stopped container b6b3538d7f748db632a3ffd74103580cd9ad988d8b6a08f1c58e90ade9ae55d4: ingress-nginx/ingress-nginx-controller-7fcf777cb7-7nw7x/controller" id=1de39eb0-15f2-4d0f-ae0b-bea1a204d399 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.773150970Z" level=info msg="Stopping pod sandbox: 493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc" id=7ff2528e-97ed-4725-b107-88986109099e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.773950123Z" level=info msg="Stopped container b6b3538d7f748db632a3ffd74103580cd9ad988d8b6a08f1c58e90ade9ae55d4: ingress-nginx/ingress-nginx-controller-7fcf777cb7-7nw7x/controller" id=8e57b8ec-dff5-4740-bb2a-15008060fc29 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.774414672Z" level=info msg="Stopping pod sandbox: 493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc" id=21c7aed0-4f3f-4e12-9d16-c5850a7c25bc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.778496204Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3GQLVYCT3YG7ZES5 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-EGXVSLXNVPOHMF3Q - [0:0]\n-X KUBE-HP-3GQLVYCT3YG7ZES5\n-X KUBE-HP-EGXVSLXNVPOHMF3Q\nCOMMIT\n"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.781579052Z" level=info msg="Closing host port tcp:80"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.781634601Z" level=info msg="Closing host port tcp:443"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.783165937Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.783205174Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.783378712Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-7nw7x Namespace:ingress-nginx ID:493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc UID:910939be-eee8-46dd-b2fd-4fdb50deff3b NetNS:/var/run/netns/9cab6769-c320-432b-95d3-8ea3bb854fae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.783523401Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-7nw7x from CNI network \"kindnet\" (type=ptp)"
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.814986103Z" level=info msg="Stopped pod sandbox: 493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc" id=7ff2528e-97ed-4725-b107-88986109099e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 05 21:31:33 ingress-addon-legacy-570164 crio[894]: time="2023-10-05 21:31:33.815103428Z" level=info msg="Stopped pod sandbox (already stopped): 493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc" id=21c7aed0-4f3f-4e12-9d16-c5850a7c25bc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
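Note: every entry in this section is CRI-O serving a CRI gRPC call on the runtime.v1alpha2 services named in the log (ImageService/ImageStatus, RuntimeService/CreateContainer, StopPodSandbox, and so on). A minimal sketch of issuing one such call against the crio socket, assuming the k8s.io/cri-api v1alpha2 client package; error handling is abbreviated.

    // Sketch: call ImageStatus over CRI-O's unix socket, mirroring the
    // /runtime.v1alpha2.ImageService/ImageStatus requests logged above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        img := pb.NewImageServiceClient(conn)
        resp, err := img.ImageStatus(ctx, &pb.ImageStatusRequest{
            Image: &pb.ImageSpec{Image: "gcr.io/google-samples/hello-app:1.0"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("id=%s size=%d\n", resp.Image.Id, resp.Image.Size_)
    }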
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d67804247e1e0       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                   7 seconds ago       Exited              hello-world-app           2                   44bedbefaf3f5       hello-world-app-5f5d8b66bb-fcknb
	5f1944d9e5001       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                    2 minutes ago       Running             nginx                     0                   a9c832c5ba27c       nginx
	b6b3538d7f748       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   493e6bc310497       ingress-nginx-controller-7fcf777cb7-7nw7x
	5ffe1a7f5bea3       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   9060d1aee7100       ingress-nginx-admission-patch-fz72f
	188c402740586       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   12e781b5f6b06       ingress-nginx-admission-create-hnvm8
	07c23fe672f3f       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   79991a250af46       coredns-66bff467f8-4rspm
	072309ed89449       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   6cfd454c60693       storage-provisioner
	b4e40bef266d3       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   12f12d4865e17       coredns-66bff467f8-9fg7c
	15708ee98c106       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   e57647c512fb2       kindnet-5g5sr
	06ad1bf934470       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   1876278ab3bc4       kube-proxy-blbsg
	5088fa84e244d       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   3f6005ee8a23e       kube-scheduler-ingress-addon-legacy-570164
	19d479b03e359       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   068c55c10b2a4       etcd-ingress-addon-legacy-570164
	88513e87cbb91       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   7550aa8f1ecdf       kube-controller-manager-ingress-addon-legacy-570164
	de4ab074dbafd       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   07c822e2dcf7e       kube-apiserver-ingress-addon-legacy-570164
	
	* 
	* ==> coredns [07c23fe672f3f3b9eb0863fbf2ddd59fc2d73a70a1bd828f7aa2ef139eb93b0a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:54750 - 24505 "HINFO IN 768117150597302056.7952293481342377826. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012544724s
	
	* 
	* ==> coredns [b4e40bef266d3bcdfc9696476185a8f92fe747694874a9dfa85d4bf9c5847944] <==
	* [INFO] 10.244.0.6:49957 - 45372 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032722s
	[INFO] 10.244.0.6:49957 - 19614 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00078193s
	[INFO] 10.244.0.6:46203 - 14673 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002359018s
	[INFO] 10.244.0.6:46203 - 34046 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000193879s
	[INFO] 10.244.0.6:49957 - 8391 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001785177s
	[INFO] 10.244.0.6:49957 - 39083 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001272924s
	[INFO] 10.244.0.6:49957 - 41328 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041165s
	[INFO] 10.244.0.6:52336 - 64703 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083528s
	[INFO] 10.244.0.6:52336 - 2394 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120836s
	[INFO] 10.244.0.6:54728 - 55107 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069464s
	[INFO] 10.244.0.6:52336 - 25802 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048254s
	[INFO] 10.244.0.6:54728 - 15251 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031261s
	[INFO] 10.244.0.6:54728 - 10036 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034774s
	[INFO] 10.244.0.6:52336 - 27162 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030802s
	[INFO] 10.244.0.6:52336 - 40352 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067954s
	[INFO] 10.244.0.6:54728 - 60546 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000123438s
	[INFO] 10.244.0.6:52336 - 1242 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004274s
	[INFO] 10.244.0.6:54728 - 1185 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034642s
	[INFO] 10.244.0.6:54728 - 44801 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.001515049s
	[INFO] 10.244.0.6:52336 - 2788 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002017209s
	[INFO] 10.244.0.6:52336 - 56683 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001525805s
	[INFO] 10.244.0.6:54728 - 3057 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002299752s
	[INFO] 10.244.0.6:52336 - 56190 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000152394s
	[INFO] 10.244.0.6:54728 - 10078 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003719433s
	[INFO] 10.244.0.6:54728 - 22914 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000416557s
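Note: the NXDOMAIN-then-NOERROR pattern above is the querying pod's resolver walking its DNS search path. With the default ndots:5, hello-world-app.default.svc.cluster.local has only four dots, so each search suffix is tried first, including the pod's own ingress-nginx.svc.cluster.local namespace suffix and the host-inherited us-east-2.compute.internal, before the name is finally queried as-is and succeeds. A small sketch of that expansion; the search list is read off the suffixes in the queries above.

    // Sketch: reproduce the resolver's search-path expansion seen in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func expand(name string, search []string, ndots int) []string {
        var queries []string
        absoluteFirst := strings.Count(name, ".") >= ndots
        if absoluteFirst {
            queries = append(queries, name)
        }
        for _, s := range search {
            queries = append(queries, name+"."+s)
        }
        if !absoluteFirst {
            queries = append(queries, name) // tried last; the query that finally NOERRORs above
        }
        return queries
    }

    func main() {
        search := []string{
            "ingress-nginx.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "us-east-2.compute.internal",
        }
        for _, q := range expand("hello-world-app.default.svc.cluster.local", search, 5) {
            fmt.Println(q)
        }
    }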
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-570164
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-570164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=ingress-addon-legacy-570164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_27_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:27:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-570164
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:31:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:31:18 +0000   Thu, 05 Oct 2023 21:27:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:31:18 +0000   Thu, 05 Oct 2023 21:27:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:31:18 +0000   Thu, 05 Oct 2023 21:27:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:31:18 +0000   Thu, 05 Oct 2023 21:28:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-570164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 538734e1f7494493954dfc61b4cd38c9
	  System UUID:                1841608a-8d04-4747-86a8-042d86371bdb
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-fcknb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-4rspm                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m39s
	  kube-system                 coredns-66bff467f8-9fg7c                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m39s
	  kube-system                 etcd-ingress-addon-legacy-570164                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kindnet-5g5sr                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m39s
	  kube-system                 kube-apiserver-ingress-addon-legacy-570164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-570164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-proxy-blbsg                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-scheduler-ingress-addon-legacy-570164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             190Mi (2%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m6s (x4 over 4m6s)  kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x5 over 4m6s)  kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x4 over 4m6s)  kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m51s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s                kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s                kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s                kubelet     Node ingress-addon-legacy-570164 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m31s                kubelet     Node ingress-addon-legacy-570164 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001109] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000053 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001037] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +0.002754] FS-Cache: Duplicate cookie detected
	[  +0.000682] FS-Cache: O-cookie c=0000004d [p=0000004a fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000005885d3f4
	[  +0.001100] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000054 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000915] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000009c3c0e5e
	[  +0.001020] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +2.998730] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000004b [p=0000004a fl=226 nc=0 na=1]
	[  +0.000947] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000003ef1d116
	[  +0.001076] FS-Cache: O-key=[8] '6ed7c90000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000056 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000003824801
	[  +0.001036] FS-Cache: N-key=[8] '6ed7c90000000000'
	[  +0.302950] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000050 [p=0000004a fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=00000000b99a9016
	[  +0.001212] FS-Cache: O-key=[8] '74d7c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000057 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001183] FS-Cache: N-key=[8] '74d7c90000000000'
	
	* 
	* ==> etcd [19d479b03e359e5c25bed6344c576a68ccc9d338e14fa27df0cea215579c36da] <==
	* raft2023/10/05 21:27:36 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/05 21:27:36 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/05 21:27:36 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/05 21:27:36 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-05 21:27:37.061350 W | auth: simple token is not cryptographically signed
	2023-10-05 21:27:37.101686 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-05 21:27:37.205629 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/05 21:27:37 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-05 21:27:37.306077 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-05 21:27:37.307739 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-05 21:27:37.308054 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-05 21:27:37.308350 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/05 21:27:37 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/05 21:27:37 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/05 21:27:37 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/05 21:27:37 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/05 21:27:37 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-05 21:27:37.481457 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-05 21:27:37.517417 I | embed: ready to serve client requests
	2023-10-05 21:27:37.521443 I | etcdserver: published {Name:ingress-addon-legacy-570164 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-05 21:27:37.527430 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-05 21:27:37.527553 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-05 21:27:37.529375 I | embed: ready to serve client requests
	2023-10-05 21:27:37.548760 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-05 21:27:37.609363 I | embed: serving client requests on 192.168.49.2:2379
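Note: the raft lines above show a single-member cluster voting for itself and taking leadership at term 2. A hedged clientv3 sketch for querying member status from this endpoint; the TLS wiring is deliberately omitted, and since the embed lines above show client-cert auth is enabled, a real client would need the material under /var/lib/minikube/certs/etcd.

    // Sketch: query etcd member status with clientv3 (TLS config omitted).
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.49.2:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "https://192.168.49.2:2379")
        if err != nil {
            panic(err)
        }
        // On this single-member cluster, Leader is the member's own ID and
        // RaftTerm should be 2, matching the election logged above.
        fmt.Printf("version=%s leader=%x raftTerm=%d\n", st.Version, st.Leader, st.RaftTerm)
    }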
	
	* 
	* ==> kernel <==
	*  21:31:39 up  7:14,  0 users,  load average: 0.14, 0.78, 1.28
	Linux ingress-addon-legacy-570164 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [15708ee98c10661116fd7135f99f5b538441da161eb39d2c8e1cc9ce40fc4b8c] <==
	* I1005 21:29:35.588543       1 main.go:227] handling current node
	I1005 21:29:45.598967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:29:45.598997       1 main.go:227] handling current node
	I1005 21:29:55.602551       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:29:55.602581       1 main.go:227] handling current node
	I1005 21:30:05.606539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:05.606569       1 main.go:227] handling current node
	I1005 21:30:15.612016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:15.612046       1 main.go:227] handling current node
	I1005 21:30:25.619826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:25.619855       1 main.go:227] handling current node
	I1005 21:30:35.623148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:35.623176       1 main.go:227] handling current node
	I1005 21:30:45.633122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:45.633150       1 main.go:227] handling current node
	I1005 21:30:55.645961       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:30:55.645990       1 main.go:227] handling current node
	I1005 21:31:05.655743       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:31:05.655782       1 main.go:227] handling current node
	I1005 21:31:15.661064       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:31:15.661093       1 main.go:227] handling current node
	I1005 21:31:25.671696       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:31:25.671722       1 main.go:227] handling current node
	I1005 21:31:35.675520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1005 21:31:35.675546       1 main.go:227] handling current node
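Note: the timestamps above suggest kindnet wakes roughly every ten seconds, lists the cluster's nodes, and reconciles each; on this single-node cluster that reduces to "handling current node". A stripped-down sketch of such a loop, assuming client-go and in-cluster credentials; this is not kindnet's actual code.

    // Sketch: a fixed-interval node reconcile loop, in the spirit of the
    // kindnet main.go output above.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // kindnet runs as a DaemonSet pod
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for range time.Tick(10 * time.Second) {
            nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                continue // transient API errors: retry on the next tick
            }
            for _, n := range nodes.Items {
                fmt.Printf("handling node %s with addresses %v\n", n.Name, n.Status.Addresses)
            }
        }
    }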
	
	* 
	* ==> kube-apiserver [de4ab074dbafd537768df922663004cec4286ae23be8996935c95b3dae97a7c0] <==
	* E1005 21:27:41.636606       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1005 21:27:41.689655       1 cache.go:39] Caches are synced for autoregister controller
	I1005 21:27:41.697430       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 21:27:41.700389       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 21:27:41.700507       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1005 21:27:41.706543       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1005 21:27:42.487562       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1005 21:27:42.487677       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1005 21:27:42.500871       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1005 21:27:42.510866       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1005 21:27:42.510964       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1005 21:27:42.899874       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 21:27:42.953137       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1005 21:27:43.037603       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1005 21:27:43.038661       1 controller.go:609] quota admission added evaluator for: endpoints
	I1005 21:27:43.042466       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1005 21:27:43.957640       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1005 21:27:44.765149       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1005 21:27:44.894886       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1005 21:27:48.141930       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 21:28:00.905469       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1005 21:28:00.913801       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1005 21:28:26.476600       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1005 21:28:52.956109       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1005 21:31:30.667518       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400d86fa28), encoder:(*versioning.codec)(0x4004e24820), buf:(*bytes.Buffer)(0x400e1755c0)})
	
	* 
	* ==> kube-controller-manager [88513e87cbb912363fb65a3a693b5ae31575dc1861d59115c7c400b99dbdaab8] <==
	* I1005 21:28:00.949320       1 shared_informer.go:230] Caches are synced for GC 
	I1005 21:28:00.950579       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I1005 21:28:00.951068       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1005 21:28:00.961662       1 shared_informer.go:230] Caches are synced for job 
	I1005 21:28:00.971425       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b02b760c-2b80-41a2-a38f-228d3961d21d", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9fg7c
	I1005 21:28:00.982289       1 shared_informer.go:230] Caches are synced for stateful set 
	I1005 21:28:01.045928       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b02b760c-2b80-41a2-a38f-228d3961d21d", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-4rspm
	I1005 21:28:01.055237       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 21:28:01.055273       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E1005 21:28:01.081976       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"6232860e-972f-453d-9c05-9e66ff4adc54", ResourceVersion:"239", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63832138065, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40018b1320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40018b1340)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40018b1360), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018b1380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018b13a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018b13c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018b13e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018b1420)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f364b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000f5d368), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400038d030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40012e63e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000f5d3b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1005 21:28:01.084018       1 shared_informer.go:230] Caches are synced for resource quota 
	I1005 21:28:01.089184       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1005 21:28:01.112264       1 shared_informer.go:230] Caches are synced for attach detach 
	I1005 21:28:01.147735       1 shared_informer.go:230] Caches are synced for resource quota 
	I1005 21:28:10.921890       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1005 21:28:26.446065       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5b412554-f4fe-41fa-b543-e44c71020d12", APIVersion:"apps/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1005 21:28:26.463578       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"915d2d36-b075-4b87-9377-86e93856e80e", APIVersion:"apps/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-7nw7x
	I1005 21:28:26.509294       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"8335469f-6f76-4feb-9b0c-b66268c397bd", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hnvm8
	I1005 21:28:26.549556       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"111c6d35-30c3-4418-baef-d4c2469ff481", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-fz72f
	I1005 21:28:29.313494       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"8335469f-6f76-4feb-9b0c-b66268c397bd", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 21:28:29.336630       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"111c6d35-30c3-4418-baef-d4c2469ff481", APIVersion:"batch/v1", ResourceVersion:"515", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1005 21:31:14.123488       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7ca3b30a-e047-49df-a0e5-7dd51a334b0c", APIVersion:"apps/v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1005 21:31:14.140256       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"6f34ad51-d3a2-4c3c-a4d8-2b30f902752f", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-fcknb
	E1005 21:31:36.241803       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-vbdtz" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [06ad1bf93447026579e7c9673c511a73beae259745edde264c51dc23b40393f4] <==
	* W1005 21:28:02.363778       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1005 21:28:02.374937       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1005 21:28:02.375066       1 server_others.go:186] Using iptables Proxier.
	I1005 21:28:02.375454       1 server.go:583] Version: v1.18.20
	I1005 21:28:02.376620       1 config.go:133] Starting endpoints config controller
	I1005 21:28:02.376712       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1005 21:28:02.376789       1 config.go:315] Starting service config controller
	I1005 21:28:02.376815       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1005 21:28:02.477393       1 shared_informer.go:230] Caches are synced for service config 
	I1005 21:28:02.477470       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5088fa84e244d03712dc93489f282b85a9cbceea4b2988d71bc8a848aaeb5126] <==
	* W1005 21:27:41.656279       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:27:41.700943       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1005 21:27:41.701043       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1005 21:27:41.703388       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:27:41.703514       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:27:41.703643       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1005 21:27:41.703709       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1005 21:27:41.708788       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:27:41.712566       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:27:41.712979       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:27:41.713165       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 21:27:41.713311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:27:41.713518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 21:27:41.713652       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 21:27:41.713787       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 21:27:41.713920       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:27:41.716252       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 21:27:41.716424       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1005 21:27:41.716642       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:27:42.606640       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:27:42.612622       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:27:42.711711       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1005 21:27:42.714877       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1005 21:27:43.303657       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1005 21:28:01.079615       1 factory.go:503] pod: kube-system/coredns-66bff467f8-9fg7c is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 05 21:31:18 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:18.631713    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 225ebfe13bfa8393cf8aaf2cef473a863300b58e805c920361c95feace15b572
	Oct 05 21:31:18 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:18.632222    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3
	Oct 05 21:31:18 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:18.632590    1637 pod_workers.go:191] Error syncing pod 03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9 ("hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"
	Oct 05 21:31:19 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:19.635044    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3
	Oct 05 21:31:19 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:19.635301    1637 pod_workers.go:191] Error syncing pod 03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9 ("hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"
	Oct 05 21:31:21 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:21.213045    1637 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 21:31:21 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:21.213088    1637 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 21:31:21 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:21.213131    1637 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 05 21:31:21 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:21.213165    1637 pod_workers.go:191] Error syncing pod 8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af ("kube-ingress-dns-minikube_kube-system(8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 05 21:31:30 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:30.184340    1637 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-x88gk" (UniqueName: "kubernetes.io/secret/8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af-minikube-ingress-dns-token-x88gk") pod "8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af" (UID: "8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af")
	Oct 05 21:31:30 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:30.192394    1637 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af-minikube-ingress-dns-token-x88gk" (OuterVolumeSpecName: "minikube-ingress-dns-token-x88gk") pod "8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af" (UID: "8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af"). InnerVolumeSpecName "minikube-ingress-dns-token-x88gk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:31:30 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:30.284758    1637 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-x88gk" (UniqueName: "kubernetes.io/secret/8ecca7e4-d28c-4bca-a0e2-e7c80e17c5af-minikube-ingress-dns-token-x88gk") on node "ingress-addon-legacy-570164" DevicePath ""
	Oct 05 21:31:31 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:31.571261    1637 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7nw7x.178b537f77a2c7cd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7nw7x", UID:"910939be-eee8-46dd-b2fd-4fdb50deff3b", APIVersion:"v1", ResourceVersion:"495", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-570164"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe8ace1de89cd, ext:226849458984, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe8ace1de89cd, ext:226849458984, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7nw7x.178b537f77a2c7cd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 21:31:31 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:31.581965    1637 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7nw7x.178b537f77a2c7cd", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7nw7x", UID:"910939be-eee8-46dd-b2fd-4fdb50deff3b", APIVersion:"v1", ResourceVersion:"495", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-570164"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13fe8ace1de89cd, ext:226849458984, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13fe8ace2665aea, ext:226858359877, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7nw7x.178b537f77a2c7cd" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 05 21:31:32 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:32.212261    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3
	Oct 05 21:31:32 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:32.655881    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1c74d5fa8b68f515f335130e9bdf65d2d7729552a48158c80a4af00a6c685fd3
	Oct 05 21:31:32 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:32.656080    1637 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d67804247e1e0eedbe927ddb637998c1fd3e2934fb8f92297ad8a6414e47fd57
	Oct 05 21:31:32 ingress-addon-legacy-570164 kubelet[1637]: E1005 21:31:32.656328    1637 pod_workers.go:191] Error syncing pod 03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9 ("hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-fcknb_default(03ef1677-1d8f-45ad-b86e-f65e0a6fa3e9)"
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.193996    1637 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-4r26h" (UniqueName: "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-ingress-nginx-token-4r26h") pod "910939be-eee8-46dd-b2fd-4fdb50deff3b" (UID: "910939be-eee8-46dd-b2fd-4fdb50deff3b")
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.194043    1637 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-webhook-cert") pod "910939be-eee8-46dd-b2fd-4fdb50deff3b" (UID: "910939be-eee8-46dd-b2fd-4fdb50deff3b")
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.199364    1637 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "910939be-eee8-46dd-b2fd-4fdb50deff3b" (UID: "910939be-eee8-46dd-b2fd-4fdb50deff3b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.202236    1637 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-ingress-nginx-token-4r26h" (OuterVolumeSpecName: "ingress-nginx-token-4r26h") pod "910939be-eee8-46dd-b2fd-4fdb50deff3b" (UID: "910939be-eee8-46dd-b2fd-4fdb50deff3b"). InnerVolumeSpecName "ingress-nginx-token-4r26h". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.294380    1637 reconciler.go:319] Volume detached for volume "ingress-nginx-token-4r26h" (UniqueName: "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-ingress-nginx-token-4r26h") on node "ingress-addon-legacy-570164" DevicePath ""
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: I1005 21:31:34.294436    1637 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/910939be-eee8-46dd-b2fd-4fdb50deff3b-webhook-cert") on node "ingress-addon-legacy-570164" DevicePath ""
	Oct 05 21:31:34 ingress-addon-legacy-570164 kubelet[1637]: W1005 21:31:34.661128    1637 pod_container_deletor.go:77] Container "493e6bc31049703b295584b13e35203deb5799f276b5672dec53d3dc630cc2dc" not found in pod's containers
	
	* 
	* ==> storage-provisioner [072309ed89449d121e27e0c5e5a96ee27458b357d4fa4e6ea1713984a5c204c8] <==
	* I1005 21:28:10.992420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1005 21:28:11.006594       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1005 21:28:11.006773       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1005 21:28:11.014952       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1005 21:28:11.015243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-570164_7030d62a-e409-4c5d-bb48-66d8db0e4a26!
	I1005 21:28:11.015373       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"624c9801-0a20-4642-8206-5a509dcf150e", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-570164_7030d62a-e409-4c5d-bb48-66d8db0e4a26 became leader
	I1005 21:28:11.118420       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-570164_7030d62a-e409-4c5d-bb48-66d8db0e4a26!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-570164 -n ingress-addon-legacy-570164
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-570164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.45s)

TestMultiNode/serial/PingHostFrom2Pods (4.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (233.544401ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-hrkj8): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (250.276261ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-ztvv9): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-814558
helpers_test.go:235: (dbg) docker inspect multinode-814558:

-- stdout --
	[
	    {
	        "Id": "058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347",
	        "Created": "2023-10-05T21:37:59.089753941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1518677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:37:59.428357552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/hostname",
	        "HostsPath": "/var/lib/docker/containers/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/hosts",
	        "LogPath": "/var/lib/docker/containers/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347-json.log",
	        "Name": "/multinode-814558",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-814558:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-814558",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/eee7a0f7550149e5ef51f4ead3bdcab1be5ab226c4aa52a3ce4e822c135b0b5a-init/diff:/var/lib/docker/overlay2/d90b9e2f667f252141d832d5a382f20f93e3e59a1248437095891beeaafeffd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee7a0f7550149e5ef51f4ead3bdcab1be5ab226c4aa52a3ce4e822c135b0b5a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee7a0f7550149e5ef51f4ead3bdcab1be5ab226c4aa52a3ce4e822c135b0b5a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee7a0f7550149e5ef51f4ead3bdcab1be5ab226c4aa52a3ce4e822c135b0b5a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-814558",
	                "Source": "/var/lib/docker/volumes/multinode-814558/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-814558",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-814558",
	                "name.minikube.sigs.k8s.io": "multinode-814558",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7de42202748afef2110edcc13d5cc708c013454a2b9679ab862e018689b2df32",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34149"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7de42202748a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-814558": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "058ddd99bc47",
	                        "multinode-814558"
	                    ],
	                    "NetworkID": "f25a4bc44290edc86b04659f9e49367c0d14cdc3d71d672ef8d3d1b7ad21108a",
	                    "EndpointID": "3256aabd18258765abe10c10eec52ab97ae921a688ae3760a3007438b6f39677",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-814558 -n multinode-814558
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-814558 logs -n 25: (1.825582205s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-257908                           | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-257908 ssh -- ls                    | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-255996                           | mount-start-1-255996 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-257908 ssh -- ls                    | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-257908                           | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	| start   | -p mount-start-2-257908                           | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	| ssh     | mount-start-2-257908 ssh -- ls                    | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-257908                           | mount-start-2-257908 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	| delete  | -p mount-start-1-255996                           | mount-start-1-255996 | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:37 UTC |
	| start   | -p multinode-814558                               | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:37 UTC | 05 Oct 23 21:39 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- apply -f                   | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- rollout                    | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- get pods -o                | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- get pods -o                | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-hrkj8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-ztvv9 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-hrkj8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-ztvv9 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-hrkj8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-ztvv9 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- get pods -o                | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-hrkj8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC |                     |
	|         | busybox-5bc68d56bd-hrkj8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC | 05 Oct 23 21:40 UTC |
	|         | busybox-5bc68d56bd-ztvv9                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-814558 -- exec                       | multinode-814558     | jenkins | v1.31.2 | 05 Oct 23 21:40 UTC |                     |
	|         | busybox-5bc68d56bd-ztvv9 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:37:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:37:53.618063 1518222 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:37:53.618235 1518222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:37:53.618244 1518222 out.go:309] Setting ErrFile to fd 2...
	I1005 21:37:53.618250 1518222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:37:53.618532 1518222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:37:53.618925 1518222 out.go:303] Setting JSON to false
	I1005 21:37:53.619944 1518222 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":26421,"bootTime":1696515453,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:37:53.620017 1518222 start.go:138] virtualization:  
	I1005 21:37:53.622385 1518222 out.go:177] * [multinode-814558] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:37:53.624639 1518222 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:37:53.626231 1518222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:37:53.624872 1518222 notify.go:220] Checking for updates...
	I1005 21:37:53.629563 1518222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:37:53.631353 1518222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:37:53.633202 1518222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:37:53.634824 1518222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:37:53.636910 1518222 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:37:53.661708 1518222 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:37:53.661814 1518222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:37:53.745151 1518222 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-05 21:37:53.735032325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:37:53.745261 1518222 docker.go:294] overlay module found
	I1005 21:37:53.747627 1518222 out.go:177] * Using the docker driver based on user configuration
	I1005 21:37:53.749473 1518222 start.go:298] selected driver: docker
	I1005 21:37:53.749493 1518222 start.go:902] validating driver "docker" against <nil>
	I1005 21:37:53.749510 1518222 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:37:53.750160 1518222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:37:53.822569 1518222 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-05 21:37:53.812277614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:37:53.822755 1518222 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:37:53.822981 1518222 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1005 21:37:53.825028 1518222 out.go:177] * Using Docker driver with root privileges
	I1005 21:37:53.826934 1518222 cni.go:84] Creating CNI manager for ""
	I1005 21:37:53.826958 1518222 cni.go:136] 0 nodes found, recommending kindnet
	I1005 21:37:53.826971 1518222 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:37:53.826986 1518222 start_flags.go:321] config:
	{Name:multinode-814558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
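This config block is what gets persisted to the profile a few lines below. A minimal way to read a single field back out of the saved JSON, assuming jq is installed on the build host and that the profile layout matches the path logged later (a hypothetical check, not part of the test run):

	jq -r '.KubernetesConfig.ContainerRuntime' /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json
	# expected: crio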
	I1005 21:37:53.829143 1518222 out.go:177] * Starting control plane node multinode-814558 in cluster multinode-814558
	I1005 21:37:53.830772 1518222 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:37:53.832427 1518222 out.go:177] * Pulling base image ...
	I1005 21:37:53.834298 1518222 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:37:53.834359 1518222 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:37:53.834373 1518222 cache.go:57] Caching tarball of preloaded images
	I1005 21:37:53.834371 1518222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:37:53.834449 1518222 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:37:53.834459 1518222 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:37:53.834824 1518222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json ...
	I1005 21:37:53.834855 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json: {Name:mkc23c4ad507f07e2947b62d5fb3272435a3c3af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:37:53.852199 1518222 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:37:53.852223 1518222 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:37:53.852246 1518222 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:37:53.852297 1518222 start.go:365] acquiring machines lock for multinode-814558: {Name:mk466d2798fc994459a2cbfa4348823a5bda9993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:37:53.852419 1518222 start.go:369] acquired machines lock for "multinode-814558" in 102.457µs
	I1005 21:37:53.852447 1518222 start.go:93] Provisioning new machine with config: &{Name:multinode-814558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:37:53.852537 1518222 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:37:53.855052 1518222 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 21:37:53.855323 1518222 start.go:159] libmachine.API.Create for "multinode-814558" (driver="docker")
	I1005 21:37:53.855352 1518222 client.go:168] LocalClient.Create starting
	I1005 21:37:53.855453 1518222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:37:53.855495 1518222 main.go:141] libmachine: Decoding PEM data...
	I1005 21:37:53.855516 1518222 main.go:141] libmachine: Parsing certificate...
	I1005 21:37:53.855607 1518222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:37:53.855638 1518222 main.go:141] libmachine: Decoding PEM data...
	I1005 21:37:53.855654 1518222 main.go:141] libmachine: Parsing certificate...
	I1005 21:37:53.856041 1518222 cli_runner.go:164] Run: docker network inspect multinode-814558 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:37:53.873303 1518222 cli_runner.go:211] docker network inspect multinode-814558 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:37:53.873415 1518222 network_create.go:281] running [docker network inspect multinode-814558] to gather additional debugging logs...
	I1005 21:37:53.873437 1518222 cli_runner.go:164] Run: docker network inspect multinode-814558
	W1005 21:37:53.890739 1518222 cli_runner.go:211] docker network inspect multinode-814558 returned with exit code 1
	I1005 21:37:53.890774 1518222 network_create.go:284] error running [docker network inspect multinode-814558]: docker network inspect multinode-814558: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-814558 not found
	I1005 21:37:53.890788 1518222 network_create.go:286] output of [docker network inspect multinode-814558]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-814558 not found
	
	** /stderr **
	I1005 21:37:53.890898 1518222 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:37:53.910851 1518222 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d16b9e9a692c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:9e:45:13} reservation:<nil>}
	I1005 21:37:53.911237 1518222 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000d57be0}
	I1005 21:37:53.911264 1518222 network_create.go:124] attempt to create docker network multinode-814558 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1005 21:37:53.911324 1518222 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-814558 multinode-814558
	I1005 21:37:53.984083 1518222 network_create.go:108] docker network multinode-814558 192.168.58.0/24 created
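The subnet and gateway chosen above can be read back with the same inspect template minikube uses; a minimal sketch, assuming the network still exists:

	docker network inspect multinode-814558 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected: 192.168.58.0/24 192.168.58.1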
	I1005 21:37:53.984115 1518222 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-814558" container
	I1005 21:37:53.984185 1518222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:37:54.006064 1518222 cli_runner.go:164] Run: docker volume create multinode-814558 --label name.minikube.sigs.k8s.io=multinode-814558 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:37:54.027741 1518222 oci.go:103] Successfully created a docker volume multinode-814558
	I1005 21:37:54.027871 1518222 cli_runner.go:164] Run: docker run --rm --name multinode-814558-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814558 --entrypoint /usr/bin/test -v multinode-814558:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:37:54.629369 1518222 oci.go:107] Successfully prepared a docker volume multinode-814558
	I1005 21:37:54.629417 1518222 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:37:54.629437 1518222 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:37:54.629535 1518222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-814558:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:37:59.009695 1518222 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-814558:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.380111396s)
	I1005 21:37:59.009730 1518222 kic.go:199] duration metric: took 4.380289 seconds to extract preloaded images to volume
	W1005 21:37:59.009869 1518222 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:37:59.009974 1518222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:37:59.072854 1518222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-814558 --name multinode-814558 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814558 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-814558 --network multinode-814558 --ip 192.168.58.2 --volume multinode-814558:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
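Each --publish=127.0.0.1:: in the run command above maps a container port to an ephemeral host port. The SSH port that shows up later in this log (34152) comes from the same inspect expression minikube itself runs; a sketch, assuming the container is up:

	docker container inspect multinode-814558 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'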
	I1005 21:37:59.436921 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Running}}
	I1005 21:37:59.468298 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:37:59.496278 1518222 cli_runner.go:164] Run: docker exec multinode-814558 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:37:59.609328 1518222 oci.go:144] the created container "multinode-814558" has a running status.
	I1005 21:37:59.609402 1518222 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa...
	I1005 21:37:59.959998 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 21:37:59.960045 1518222 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:37:59.982451 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:38:00.007605 1518222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:38:00.007641 1518222 kic_runner.go:114] Args: [docker exec --privileged multinode-814558 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:38:00.151234 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:38:00.241218 1518222 machine.go:88] provisioning docker machine ...
	I1005 21:38:00.241254 1518222 ubuntu.go:169] provisioning hostname "multinode-814558"
	I1005 21:38:00.241328 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:00.287108 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:38:00.287571 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34152 <nil> <nil>}
	I1005 21:38:00.287587 1518222 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-814558 && echo "multinode-814558" | sudo tee /etc/hostname
	I1005 21:38:00.288390 1518222 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43962->127.0.0.1:34152: read: connection reset by peer
	I1005 21:38:03.436728 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-814558
	
	I1005 21:38:03.436823 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:03.459282 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:38:03.459691 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34152 <nil> <nil>}
	I1005 21:38:03.459714 1518222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-814558' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-814558/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-814558' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:38:03.590724 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:38:03.590749 1518222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:38:03.590768 1518222 ubuntu.go:177] setting up certificates
	I1005 21:38:03.590777 1518222 provision.go:83] configureAuth start
	I1005 21:38:03.590836 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558
	I1005 21:38:03.612250 1518222 provision.go:138] copyHostCerts
	I1005 21:38:03.612292 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:38:03.612331 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:38:03.612338 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:38:03.612416 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:38:03.612493 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:38:03.612511 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:38:03.612516 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:38:03.612542 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:38:03.612579 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:38:03.612642 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:38:03.612649 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:38:03.612677 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:38:03.612724 1518222 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.multinode-814558 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-814558]
	I1005 21:38:04.051557 1518222 provision.go:172] copyRemoteCerts
	I1005 21:38:04.054200 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:38:04.054300 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.072947 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:04.172183 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 21:38:04.172244 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:38:04.204253 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 21:38:04.204311 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1005 21:38:04.237852 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 21:38:04.237915 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 21:38:04.267557 1518222 provision.go:86] duration metric: configureAuth took 676.766699ms
	I1005 21:38:04.267588 1518222 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:38:04.267784 1518222 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:38:04.267897 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.286141 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:38:04.286569 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34152 <nil> <nil>}
	I1005 21:38:04.286590 1518222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:38:04.536491 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:38:04.536514 1518222 machine.go:91] provisioned docker machine in 4.295273407s
	I1005 21:38:04.536524 1518222 client.go:171] LocalClient.Create took 10.681161654s
	I1005 21:38:04.536541 1518222 start.go:167] duration metric: libmachine.API.Create for "multinode-814558" took 10.681220543s
	I1005 21:38:04.536560 1518222 start.go:300] post-start starting for "multinode-814558" (driver="docker")
	I1005 21:38:04.536570 1518222 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:38:04.536649 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:38:04.536696 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.554726 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:04.656454 1518222 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:38:04.660545 1518222 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1005 21:38:04.660564 1518222 command_runner.go:130] > NAME="Ubuntu"
	I1005 21:38:04.660572 1518222 command_runner.go:130] > VERSION_ID="22.04"
	I1005 21:38:04.660578 1518222 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1005 21:38:04.660584 1518222 command_runner.go:130] > VERSION_CODENAME=jammy
	I1005 21:38:04.660588 1518222 command_runner.go:130] > ID=ubuntu
	I1005 21:38:04.660593 1518222 command_runner.go:130] > ID_LIKE=debian
	I1005 21:38:04.660599 1518222 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1005 21:38:04.660605 1518222 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1005 21:38:04.660613 1518222 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1005 21:38:04.660621 1518222 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1005 21:38:04.660626 1518222 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1005 21:38:04.660672 1518222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:38:04.660696 1518222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:38:04.660706 1518222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:38:04.660714 1518222 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:38:04.660724 1518222 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:38:04.660789 1518222 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:38:04.660879 1518222 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:38:04.660887 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /etc/ssl/certs/14537862.pem
	I1005 21:38:04.660987 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:38:04.671621 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:38:04.700490 1518222 start.go:303] post-start completed in 163.914368ms
	I1005 21:38:04.700871 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558
	I1005 21:38:04.718141 1518222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json ...
	I1005 21:38:04.718437 1518222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:38:04.718485 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.736387 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:04.827945 1518222 command_runner.go:130] > 17%
	I1005 21:38:04.828019 1518222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:38:04.833954 1518222 command_runner.go:130] > 163G
	I1005 21:38:04.834364 1518222 start.go:128] duration metric: createHost completed in 10.981815091s
	I1005 21:38:04.834385 1518222 start.go:83] releasing machines lock for "multinode-814558", held for 10.981956868s
	I1005 21:38:04.834458 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558
	I1005 21:38:04.852071 1518222 ssh_runner.go:195] Run: cat /version.json
	I1005 21:38:04.852127 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.852188 1518222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:38:04.852241 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:04.876755 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:04.892655 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:05.114495 1518222 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1005 21:38:05.114599 1518222 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1696360059-17345", "minikube_version": "v1.31.2", "commit": "3da829742e24bcb762d99c062a7806436d0f28e3"}
	I1005 21:38:05.114750 1518222 ssh_runner.go:195] Run: systemctl --version
	I1005 21:38:05.120622 1518222 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1005 21:38:05.120662 1518222 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1005 21:38:05.120731 1518222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:38:05.270939 1518222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:38:05.276168 1518222 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1005 21:38:05.276195 1518222 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1005 21:38:05.276203 1518222 command_runner.go:130] > Device: 3ah/58d	Inode: 5449409     Links: 1
	I1005 21:38:05.276211 1518222 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:38:05.276218 1518222 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1005 21:38:05.276225 1518222 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1005 21:38:05.276232 1518222 command_runner.go:130] > Change: 2023-10-05 21:15:15.895759670 +0000
	I1005 21:38:05.276241 1518222 command_runner.go:130] >  Birth: 2023-10-05 21:15:15.895759670 +0000
	I1005 21:38:05.276611 1518222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:38:05.302159 1518222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:38:05.302237 1518222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:38:05.341821 1518222 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1005 21:38:05.341856 1518222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
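Both the loopback and bridge/podman configs are parked with a .mk_disabled suffix so that only the CNI minikube installs later (kindnet, per the recommendation at the top of this run) is active. A quick way to confirm which configs remain, assuming a minikube binary on PATH:

	minikube -p multinode-814558 ssh -- ls /etc/cni/net.d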
	I1005 21:38:05.341865 1518222 start.go:469] detecting cgroup driver to use...
	I1005 21:38:05.341897 1518222 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:38:05.341948 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:38:05.361809 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:38:05.375872 1518222 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:38:05.375993 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:38:05.392834 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:38:05.411023 1518222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:38:05.520153 1518222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:38:05.624230 1518222 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1005 21:38:05.624718 1518222 docker.go:213] disabling docker service ...
	I1005 21:38:05.624829 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:38:05.648244 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:38:05.662867 1518222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:38:05.763956 1518222 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1005 21:38:05.764111 1518222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:38:05.868091 1518222 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1005 21:38:05.868203 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:38:05.881486 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:38:05.900966 1518222 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1005 21:38:05.902615 1518222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 21:38:05.902720 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:38:05.914759 1518222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 21:38:05.914833 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:38:05.927984 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:38:05.940435 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:38:05.952974 1518222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
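Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with the following values; a sketch of the expected lines, assuming the stock kicbase config as the starting point:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"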
	I1005 21:38:05.964476 1518222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:38:05.975010 1518222 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1005 21:38:05.975104 1518222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:38:05.985302 1518222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:38:06.089410 1518222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 21:38:06.227651 1518222 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 21:38:06.227724 1518222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 21:38:06.232529 1518222 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1005 21:38:06.232550 1518222 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1005 21:38:06.232558 1518222 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1005 21:38:06.232566 1518222 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:38:06.232573 1518222 command_runner.go:130] > Access: 2023-10-05 21:38:06.212055093 +0000
	I1005 21:38:06.232580 1518222 command_runner.go:130] > Modify: 2023-10-05 21:38:06.212055093 +0000
	I1005 21:38:06.232586 1518222 command_runner.go:130] > Change: 2023-10-05 21:38:06.212055093 +0000
	I1005 21:38:06.232593 1518222 command_runner.go:130] >  Birth: -
	I1005 21:38:06.232657 1518222 start.go:537] Will wait 60s for crictl version
	I1005 21:38:06.232717 1518222 ssh_runner.go:195] Run: which crictl
	I1005 21:38:06.236862 1518222 command_runner.go:130] > /usr/bin/crictl
	I1005 21:38:06.237118 1518222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:38:06.279599 1518222 command_runner.go:130] > Version:  0.1.0
	I1005 21:38:06.279626 1518222 command_runner.go:130] > RuntimeName:  cri-o
	I1005 21:38:06.279633 1518222 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1005 21:38:06.279639 1518222 command_runner.go:130] > RuntimeApiVersion:  v1
	I1005 21:38:06.282420 1518222 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
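crictl picks up its endpoint from the /etc/crictl.yaml written earlier; an equivalent explicit invocation, assuming the default CRI-O socket path:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version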
	I1005 21:38:06.282512 1518222 ssh_runner.go:195] Run: crio --version
	I1005 21:38:06.327546 1518222 command_runner.go:130] > crio version 1.24.6
	I1005 21:38:06.327565 1518222 command_runner.go:130] > Version:          1.24.6
	I1005 21:38:06.327573 1518222 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 21:38:06.327579 1518222 command_runner.go:130] > GitTreeState:     clean
	I1005 21:38:06.327586 1518222 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 21:38:06.327592 1518222 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 21:38:06.327597 1518222 command_runner.go:130] > Compiler:         gc
	I1005 21:38:06.327603 1518222 command_runner.go:130] > Platform:         linux/arm64
	I1005 21:38:06.327609 1518222 command_runner.go:130] > Linkmode:         dynamic
	I1005 21:38:06.327622 1518222 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 21:38:06.327629 1518222 command_runner.go:130] > SeccompEnabled:   true
	I1005 21:38:06.327638 1518222 command_runner.go:130] > AppArmorEnabled:  false
	I1005 21:38:06.329694 1518222 ssh_runner.go:195] Run: crio --version
	I1005 21:38:06.374711 1518222 command_runner.go:130] > crio version 1.24.6
	I1005 21:38:06.374735 1518222 command_runner.go:130] > Version:          1.24.6
	I1005 21:38:06.374749 1518222 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 21:38:06.374754 1518222 command_runner.go:130] > GitTreeState:     clean
	I1005 21:38:06.374761 1518222 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 21:38:06.374767 1518222 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 21:38:06.374773 1518222 command_runner.go:130] > Compiler:         gc
	I1005 21:38:06.374778 1518222 command_runner.go:130] > Platform:         linux/arm64
	I1005 21:38:06.374785 1518222 command_runner.go:130] > Linkmode:         dynamic
	I1005 21:38:06.374795 1518222 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 21:38:06.374803 1518222 command_runner.go:130] > SeccompEnabled:   true
	I1005 21:38:06.374808 1518222 command_runner.go:130] > AppArmorEnabled:  false
	I1005 21:38:06.378136 1518222 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 21:38:06.380249 1518222 cli_runner.go:164] Run: docker network inspect multinode-814558 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:38:06.400954 1518222 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1005 21:38:06.405738 1518222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:38:06.419519 1518222 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:38:06.419593 1518222 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:38:06.483280 1518222 command_runner.go:130] > {
	I1005 21:38:06.483298 1518222 command_runner.go:130] >   "images": [
	I1005 21:38:06.483303 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483313 1518222 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1005 21:38:06.483319 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483327 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1005 21:38:06.483333 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483339 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483351 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1005 21:38:06.483362 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1005 21:38:06.483373 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483378 1518222 command_runner.go:130] >       "size": "60867618",
	I1005 21:38:06.483383 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.483388 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.483395 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.483403 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.483408 1518222 command_runner.go:130] >     },
	I1005 21:38:06.483413 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483421 1518222 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1005 21:38:06.483430 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483437 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 21:38:06.483441 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483448 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483462 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1005 21:38:06.483476 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1005 21:38:06.483481 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483496 1518222 command_runner.go:130] >       "size": "29037500",
	I1005 21:38:06.483501 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.483506 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.483511 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.483522 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.483527 1518222 command_runner.go:130] >     },
	I1005 21:38:06.483531 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483540 1518222 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1005 21:38:06.483545 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483559 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1005 21:38:06.483567 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483572 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483585 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1005 21:38:06.483597 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1005 21:38:06.483605 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483610 1518222 command_runner.go:130] >       "size": "51393451",
	I1005 21:38:06.483617 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.483625 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.483631 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.483639 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.483644 1518222 command_runner.go:130] >     },
	I1005 21:38:06.483648 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483661 1518222 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1005 21:38:06.483667 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483676 1518222 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1005 21:38:06.483685 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483690 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483702 1518222 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1005 21:38:06.483714 1518222 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1005 21:38:06.483722 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483730 1518222 command_runner.go:130] >       "size": "182203183",
	I1005 21:38:06.483735 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.483744 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.483749 1518222 command_runner.go:130] >       },
	I1005 21:38:06.483757 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.483762 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.483768 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.483773 1518222 command_runner.go:130] >     },
	I1005 21:38:06.483780 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483789 1518222 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1005 21:38:06.483797 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483804 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1005 21:38:06.483808 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483815 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483825 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1005 21:38:06.483835 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1005 21:38:06.483843 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483848 1518222 command_runner.go:130] >       "size": "121054158",
	I1005 21:38:06.483853 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.483862 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.483866 1518222 command_runner.go:130] >       },
	I1005 21:38:06.483875 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.483881 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.483889 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.483894 1518222 command_runner.go:130] >     },
	I1005 21:38:06.483898 1518222 command_runner.go:130] >     {
	I1005 21:38:06.483906 1518222 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1005 21:38:06.483914 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.483925 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1005 21:38:06.483930 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483941 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.483951 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1005 21:38:06.483964 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1005 21:38:06.483972 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.483978 1518222 command_runner.go:130] >       "size": "117187380",
	I1005 21:38:06.483986 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.483992 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.483996 1518222 command_runner.go:130] >       },
	I1005 21:38:06.484003 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.484008 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.484018 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.484023 1518222 command_runner.go:130] >     },
	I1005 21:38:06.484031 1518222 command_runner.go:130] >     {
	I1005 21:38:06.484039 1518222 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1005 21:38:06.484048 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.484054 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1005 21:38:06.484064 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484069 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.484079 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1005 21:38:06.484092 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1005 21:38:06.484100 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484106 1518222 command_runner.go:130] >       "size": "69926807",
	I1005 21:38:06.484114 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.484119 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.484124 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.484130 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.484138 1518222 command_runner.go:130] >     },
	I1005 21:38:06.484143 1518222 command_runner.go:130] >     {
	I1005 21:38:06.484157 1518222 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1005 21:38:06.484164 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.484171 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1005 21:38:06.484181 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484187 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.484229 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1005 21:38:06.484255 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1005 21:38:06.484260 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484265 1518222 command_runner.go:130] >       "size": "59188020",
	I1005 21:38:06.484270 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.484275 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.484279 1518222 command_runner.go:130] >       },
	I1005 21:38:06.484284 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.484289 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.484294 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.484299 1518222 command_runner.go:130] >     },
	I1005 21:38:06.484305 1518222 command_runner.go:130] >     {
	I1005 21:38:06.484313 1518222 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1005 21:38:06.484318 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.484328 1518222 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1005 21:38:06.484333 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484338 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.484351 1518222 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1005 21:38:06.484360 1518222 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1005 21:38:06.484368 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.484373 1518222 command_runner.go:130] >       "size": "520014",
	I1005 21:38:06.484378 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.484384 1518222 command_runner.go:130] >         "value": "65535"
	I1005 21:38:06.484388 1518222 command_runner.go:130] >       },
	I1005 21:38:06.484395 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.484400 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.484409 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.484414 1518222 command_runner.go:130] >     }
	I1005 21:38:06.484421 1518222 command_runner.go:130] >   ]
	I1005 21:38:06.484425 1518222 command_runner.go:130] > }
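The JSON dump above is easier to scan when reduced to tags and sizes; a sketch, assuming jq is available wherever the output is copied (not run during the test itself):

	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'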
	I1005 21:38:06.487885 1518222 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:38:06.487913 1518222 crio.go:415] Images already preloaded, skipping extraction
	I1005 21:38:06.487992 1518222 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:38:06.529739 1518222 command_runner.go:130] > {
	I1005 21:38:06.529758 1518222 command_runner.go:130] >   "images": [
	I1005 21:38:06.529764 1518222 command_runner.go:130] >     {
	I1005 21:38:06.529773 1518222 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1005 21:38:06.529779 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.529786 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1005 21:38:06.529791 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.529796 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.529807 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1005 21:38:06.529819 1518222 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1005 21:38:06.529827 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.529833 1518222 command_runner.go:130] >       "size": "60867618",
	I1005 21:38:06.529838 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.529846 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.529854 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.529863 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.529867 1518222 command_runner.go:130] >     },
	I1005 21:38:06.529871 1518222 command_runner.go:130] >     {
	I1005 21:38:06.529881 1518222 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1005 21:38:06.529886 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.529895 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1005 21:38:06.529900 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.529904 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.529914 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1005 21:38:06.529924 1518222 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1005 21:38:06.529928 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.529936 1518222 command_runner.go:130] >       "size": "29037500",
	I1005 21:38:06.529941 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.529946 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.529951 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.529956 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.529962 1518222 command_runner.go:130] >     },
	I1005 21:38:06.529968 1518222 command_runner.go:130] >     {
	I1005 21:38:06.529976 1518222 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1005 21:38:06.529983 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.529990 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1005 21:38:06.529997 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530002 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530013 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1005 21:38:06.530023 1518222 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1005 21:38:06.530028 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530035 1518222 command_runner.go:130] >       "size": "51393451",
	I1005 21:38:06.530041 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.530045 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530050 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530055 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530060 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530064 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530074 1518222 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1005 21:38:06.530082 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530090 1518222 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1005 21:38:06.530094 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530102 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530111 1518222 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1005 21:38:06.530122 1518222 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1005 21:38:06.530129 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530134 1518222 command_runner.go:130] >       "size": "182203183",
	I1005 21:38:06.530139 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.530144 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.530150 1518222 command_runner.go:130] >       },
	I1005 21:38:06.530156 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530163 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530168 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530175 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530179 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530187 1518222 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1005 21:38:06.530195 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530201 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1005 21:38:06.530214 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530222 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530231 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1005 21:38:06.530240 1518222 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1005 21:38:06.530247 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530252 1518222 command_runner.go:130] >       "size": "121054158",
	I1005 21:38:06.530257 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.530264 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.530268 1518222 command_runner.go:130] >       },
	I1005 21:38:06.530273 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530281 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530286 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530291 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530295 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530303 1518222 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1005 21:38:06.530312 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530319 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1005 21:38:06.530324 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530331 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530341 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1005 21:38:06.530355 1518222 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1005 21:38:06.530360 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530367 1518222 command_runner.go:130] >       "size": "117187380",
	I1005 21:38:06.530371 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.530377 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.530381 1518222 command_runner.go:130] >       },
	I1005 21:38:06.530386 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530393 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530401 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530405 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530412 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530420 1518222 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1005 21:38:06.530424 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530430 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1005 21:38:06.530437 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530442 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530452 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1005 21:38:06.530463 1518222 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1005 21:38:06.530468 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530476 1518222 command_runner.go:130] >       "size": "69926807",
	I1005 21:38:06.530483 1518222 command_runner.go:130] >       "uid": null,
	I1005 21:38:06.530488 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530496 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530501 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530506 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530510 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530522 1518222 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1005 21:38:06.530527 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530534 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1005 21:38:06.530540 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530545 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530576 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1005 21:38:06.530589 1518222 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1005 21:38:06.530593 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530599 1518222 command_runner.go:130] >       "size": "59188020",
	I1005 21:38:06.530606 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.530611 1518222 command_runner.go:130] >         "value": "0"
	I1005 21:38:06.530615 1518222 command_runner.go:130] >       },
	I1005 21:38:06.530620 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530625 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530630 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530636 1518222 command_runner.go:130] >     },
	I1005 21:38:06.530641 1518222 command_runner.go:130] >     {
	I1005 21:38:06.530651 1518222 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1005 21:38:06.530658 1518222 command_runner.go:130] >       "repoTags": [
	I1005 21:38:06.530664 1518222 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1005 21:38:06.530669 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530676 1518222 command_runner.go:130] >       "repoDigests": [
	I1005 21:38:06.530691 1518222 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1005 21:38:06.530700 1518222 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1005 21:38:06.530705 1518222 command_runner.go:130] >       ],
	I1005 21:38:06.530709 1518222 command_runner.go:130] >       "size": "520014",
	I1005 21:38:06.530717 1518222 command_runner.go:130] >       "uid": {
	I1005 21:38:06.530724 1518222 command_runner.go:130] >         "value": "65535"
	I1005 21:38:06.530729 1518222 command_runner.go:130] >       },
	I1005 21:38:06.530736 1518222 command_runner.go:130] >       "username": "",
	I1005 21:38:06.530741 1518222 command_runner.go:130] >       "spec": null,
	I1005 21:38:06.530746 1518222 command_runner.go:130] >       "pinned": false
	I1005 21:38:06.530750 1518222 command_runner.go:130] >     }
	I1005 21:38:06.530756 1518222 command_runner.go:130] >   ]
	I1005 21:38:06.530761 1518222 command_runner.go:130] > }
	I1005 21:38:06.530891 1518222 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:38:06.530903 1518222 cache_images.go:84] Images are preloaded, skipping loading
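
The JSON dump above is the raw image inventory that crio.go compares against the preload manifest. The same check can be reproduced by hand from a shell on the node (a minimal sketch; assumes jq is installed there, which the stock node image may not provide):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'

Each tag printed should line up with a repoTags entry in the log above; anything missing would force the extraction step that is skipped here.
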
	I1005 21:38:06.530977 1518222 ssh_runner.go:195] Run: crio config
	I1005 21:38:06.581926 1518222 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1005 21:38:06.581951 1518222 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1005 21:38:06.581960 1518222 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1005 21:38:06.581964 1518222 command_runner.go:130] > #
	I1005 21:38:06.581973 1518222 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1005 21:38:06.581981 1518222 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1005 21:38:06.581990 1518222 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1005 21:38:06.582013 1518222 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1005 21:38:06.582022 1518222 command_runner.go:130] > # reload'.
	I1005 21:38:06.582030 1518222 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1005 21:38:06.582043 1518222 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1005 21:38:06.582052 1518222 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1005 21:38:06.582063 1518222 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1005 21:38:06.582067 1518222 command_runner.go:130] > [crio]
	I1005 21:38:06.582075 1518222 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1005 21:38:06.582084 1518222 command_runner.go:130] > # container images, in this directory.
	I1005 21:38:06.582092 1518222 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1005 21:38:06.582103 1518222 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1005 21:38:06.582110 1518222 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1005 21:38:06.582122 1518222 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1005 21:38:06.582130 1518222 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1005 21:38:06.582369 1518222 command_runner.go:130] > # storage_driver = "vfs"
	I1005 21:38:06.582391 1518222 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1005 21:38:06.582399 1518222 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1005 21:38:06.582404 1518222 command_runner.go:130] > # storage_option = [
	I1005 21:38:06.582411 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.582420 1518222 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1005 21:38:06.582430 1518222 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1005 21:38:06.582695 1518222 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1005 21:38:06.582714 1518222 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1005 21:38:06.582722 1518222 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1005 21:38:06.582728 1518222 command_runner.go:130] > # always happen on a node reboot
	I1005 21:38:06.582737 1518222 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1005 21:38:06.582744 1518222 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1005 21:38:06.582757 1518222 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1005 21:38:06.582767 1518222 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1005 21:38:06.582776 1518222 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1005 21:38:06.582786 1518222 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1005 21:38:06.582799 1518222 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1005 21:38:06.582805 1518222 command_runner.go:130] > # internal_wipe = true
	I1005 21:38:06.582812 1518222 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1005 21:38:06.582822 1518222 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1005 21:38:06.582830 1518222 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1005 21:38:06.582840 1518222 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
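
None of the defaults printed in this dump need to be edited in /etc/crio/crio.conf itself: recent CRI-O releases merge drop-in files from /etc/crio/crio.conf.d/ in lexical order. A minimal sketch overriding one of the [crio] options above (the drop-in file name and the path value are illustrative):

	sudo tee /etc/crio/crio.conf.d/05-logdir.conf <<-'EOF'
	[crio]
	log_dir = "/var/log/crio/pods"
	EOF
	sudo systemctl restart crio
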
	I1005 21:38:06.582852 1518222 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1005 21:38:06.582860 1518222 command_runner.go:130] > [crio.api]
	I1005 21:38:06.582867 1518222 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1005 21:38:06.582873 1518222 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1005 21:38:06.582892 1518222 command_runner.go:130] > # IP address on which the stream server will listen.
	I1005 21:38:06.582898 1518222 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1005 21:38:06.582906 1518222 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1005 21:38:06.582914 1518222 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1005 21:38:06.582919 1518222 command_runner.go:130] > # stream_port = "0"
	I1005 21:38:06.582926 1518222 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1005 21:38:06.582943 1518222 command_runner.go:130] > # stream_enable_tls = false
	I1005 21:38:06.582955 1518222 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1005 21:38:06.582974 1518222 command_runner.go:130] > # stream_idle_timeout = ""
	I1005 21:38:06.582983 1518222 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1005 21:38:06.582991 1518222 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1005 21:38:06.582995 1518222 command_runner.go:130] > # minutes.
	I1005 21:38:06.583003 1518222 command_runner.go:130] > # stream_tls_cert = ""
	I1005 21:38:06.583010 1518222 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1005 21:38:06.583018 1518222 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1005 21:38:06.583027 1518222 command_runner.go:130] > # stream_tls_key = ""
	I1005 21:38:06.583034 1518222 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1005 21:38:06.583046 1518222 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1005 21:38:06.583055 1518222 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1005 21:38:06.583064 1518222 command_runner.go:130] > # stream_tls_ca = ""
	I1005 21:38:06.583073 1518222 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 21:38:06.583083 1518222 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1005 21:38:06.583092 1518222 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 21:38:06.583098 1518222 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
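
Enabling TLS on the stream server means setting the stream_tls_* options together with stream_enable_tls; a sketch, with certificate paths assumed rather than taken from this cluster:

	sudo tee /etc/crio/crio.conf.d/06-stream-tls.conf <<-'EOF'
	[crio.api]
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"  # assumed path
	stream_tls_key = "/etc/crio/stream.key"   # assumed path
	EOF
	sudo systemctl restart crio
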
	I1005 21:38:06.583125 1518222 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1005 21:38:06.583137 1518222 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1005 21:38:06.583142 1518222 command_runner.go:130] > [crio.runtime]
	I1005 21:38:06.583155 1518222 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1005 21:38:06.583162 1518222 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1005 21:38:06.583167 1518222 command_runner.go:130] > # "nofile=1024:2048"
	I1005 21:38:06.583178 1518222 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1005 21:38:06.583184 1518222 command_runner.go:130] > # default_ulimits = [
	I1005 21:38:06.583188 1518222 command_runner.go:130] > # ]
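
Filled in with the "<ulimit name>=<soft limit>:<hard limit>" format documented above, the list would look like this (the value is the comment's own example, not a recommendation):

	sudo tee /etc/crio/crio.conf.d/07-ulimits.conf <<-'EOF'
	[crio.runtime]
	default_ulimits = [
	  "nofile=1024:2048",
	]
	EOF
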
	I1005 21:38:06.583199 1518222 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1005 21:38:06.583210 1518222 command_runner.go:130] > # no_pivot = false
	I1005 21:38:06.583217 1518222 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1005 21:38:06.583228 1518222 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1005 21:38:06.583235 1518222 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1005 21:38:06.583246 1518222 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1005 21:38:06.583252 1518222 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1005 21:38:06.583261 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 21:38:06.583269 1518222 command_runner.go:130] > # conmon = ""
	I1005 21:38:06.583277 1518222 command_runner.go:130] > # Cgroup setting for conmon
	I1005 21:38:06.583291 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1005 21:38:06.583297 1518222 command_runner.go:130] > conmon_cgroup = "pod"
	I1005 21:38:06.583308 1518222 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1005 21:38:06.583315 1518222 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1005 21:38:06.583327 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 21:38:06.583333 1518222 command_runner.go:130] > # conmon_env = [
	I1005 21:38:06.583338 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.583344 1518222 command_runner.go:130] > # Additional environment variables to set for all the
	I1005 21:38:06.583351 1518222 command_runner.go:130] > # containers. These are overridden if set in the
	I1005 21:38:06.583358 1518222 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1005 21:38:06.583367 1518222 command_runner.go:130] > # default_env = [
	I1005 21:38:06.583372 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.583379 1518222 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1005 21:38:06.583387 1518222 command_runner.go:130] > # selinux = false
	I1005 21:38:06.583395 1518222 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1005 21:38:06.583408 1518222 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1005 21:38:06.583415 1518222 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1005 21:38:06.583420 1518222 command_runner.go:130] > # seccomp_profile = ""
	I1005 21:38:06.583427 1518222 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1005 21:38:06.583434 1518222 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1005 21:38:06.583445 1518222 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1005 21:38:06.583451 1518222 command_runner.go:130] > # which might increase security.
	I1005 21:38:06.583461 1518222 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1005 21:38:06.583469 1518222 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1005 21:38:06.583476 1518222 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1005 21:38:06.583487 1518222 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1005 21:38:06.583495 1518222 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1005 21:38:06.583501 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:38:06.583507 1518222 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1005 21:38:06.583517 1518222 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1005 21:38:06.583526 1518222 command_runner.go:130] > # the cgroup blockio controller.
	I1005 21:38:06.583531 1518222 command_runner.go:130] > # blockio_config_file = ""
	I1005 21:38:06.583545 1518222 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1005 21:38:06.583553 1518222 command_runner.go:130] > # irqbalance daemon.
	I1005 21:38:06.583819 1518222 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1005 21:38:06.583836 1518222 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1005 21:38:06.583844 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:38:06.583849 1518222 command_runner.go:130] > # rdt_config_file = ""
	I1005 21:38:06.583857 1518222 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1005 21:38:06.583862 1518222 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1005 21:38:06.583870 1518222 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1005 21:38:06.583879 1518222 command_runner.go:130] > # separate_pull_cgroup = ""
	I1005 21:38:06.583887 1518222 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1005 21:38:06.583899 1518222 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1005 21:38:06.583905 1518222 command_runner.go:130] > # will be added.
	I1005 21:38:06.583914 1518222 command_runner.go:130] > # default_capabilities = [
	I1005 21:38:06.583919 1518222 command_runner.go:130] > # 	"CHOWN",
	I1005 21:38:06.583924 1518222 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1005 21:38:06.583928 1518222 command_runner.go:130] > # 	"FSETID",
	I1005 21:38:06.583935 1518222 command_runner.go:130] > # 	"FOWNER",
	I1005 21:38:06.583940 1518222 command_runner.go:130] > # 	"SETGID",
	I1005 21:38:06.583945 1518222 command_runner.go:130] > # 	"SETUID",
	I1005 21:38:06.583954 1518222 command_runner.go:130] > # 	"SETPCAP",
	I1005 21:38:06.583959 1518222 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1005 21:38:06.583964 1518222 command_runner.go:130] > # 	"KILL",
	I1005 21:38:06.583973 1518222 command_runner.go:130] > # ]
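
Trimming the default set uses the same list syntax; a sketch keeping only a subset of the capabilities named above:

	sudo tee /etc/crio/crio.conf.d/08-caps.conf <<-'EOF'
	[crio.runtime]
	default_capabilities = [
	  "CHOWN",
	  "SETGID",
	  "SETUID",
	  "NET_BIND_SERVICE",
	]
	EOF
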
	I1005 21:38:06.583982 1518222 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1005 21:38:06.583994 1518222 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1005 21:38:06.584000 1518222 command_runner.go:130] > # add_inheritable_capabilities = true
	I1005 21:38:06.584013 1518222 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1005 21:38:06.584020 1518222 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 21:38:06.584025 1518222 command_runner.go:130] > # default_sysctls = [
	I1005 21:38:06.584031 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.584037 1518222 command_runner.go:130] > # List of devices on the host that a
	I1005 21:38:06.584049 1518222 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1005 21:38:06.584054 1518222 command_runner.go:130] > # allowed_devices = [
	I1005 21:38:06.584063 1518222 command_runner.go:130] > # 	"/dev/fuse",
	I1005 21:38:06.584067 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.584073 1518222 command_runner.go:130] > # List of additional devices, specified as
	I1005 21:38:06.584092 1518222 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1005 21:38:06.584099 1518222 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1005 21:38:06.584106 1518222 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 21:38:06.584120 1518222 command_runner.go:130] > # additional_devices = [
	I1005 21:38:06.584347 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.584362 1518222 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1005 21:38:06.584367 1518222 command_runner.go:130] > # cdi_spec_dirs = [
	I1005 21:38:06.584372 1518222 command_runner.go:130] > # 	"/etc/cdi",
	I1005 21:38:06.584376 1518222 command_runner.go:130] > # 	"/var/run/cdi",
	I1005 21:38:06.584380 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.584388 1518222 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1005 21:38:06.584396 1518222 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1005 21:38:06.584405 1518222 command_runner.go:130] > # Defaults to false.
	I1005 21:38:06.584411 1518222 command_runner.go:130] > # device_ownership_from_security_context = false
	I1005 21:38:06.584419 1518222 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1005 21:38:06.584430 1518222 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1005 21:38:06.584436 1518222 command_runner.go:130] > # hooks_dir = [
	I1005 21:38:06.584446 1518222 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1005 21:38:06.584450 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.584462 1518222 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1005 21:38:06.584469 1518222 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1005 21:38:06.584480 1518222 command_runner.go:130] > # its default mounts from the following two files:
	I1005 21:38:06.584484 1518222 command_runner.go:130] > #
	I1005 21:38:06.584492 1518222 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1005 21:38:06.584503 1518222 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1005 21:38:06.584510 1518222 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1005 21:38:06.584517 1518222 command_runner.go:130] > #
	I1005 21:38:06.584525 1518222 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1005 21:38:06.584533 1518222 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1005 21:38:06.584540 1518222 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1005 21:38:06.584547 1518222 command_runner.go:130] > #      only add mounts it finds in this file.
	I1005 21:38:06.584554 1518222 command_runner.go:130] > #
	I1005 21:38:06.584560 1518222 command_runner.go:130] > # default_mounts_file = ""
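
Given the /SRC:/DST format described above, an override file is just bare mount pairs, one per line; a sketch (the mount itself is illustrative):

	sudo tee /etc/containers/mounts.conf <<-'EOF'
	/etc/ssl/certs:/etc/ssl/certs
	EOF
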
	I1005 21:38:06.584567 1518222 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1005 21:38:06.584579 1518222 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1005 21:38:06.584584 1518222 command_runner.go:130] > # pids_limit = 0
	I1005 21:38:06.584596 1518222 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1005 21:38:06.584604 1518222 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1005 21:38:06.584616 1518222 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1005 21:38:06.584626 1518222 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1005 21:38:06.584631 1518222 command_runner.go:130] > # log_size_max = -1
	I1005 21:38:06.584640 1518222 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1005 21:38:06.584653 1518222 command_runner.go:130] > # log_to_journald = false
	I1005 21:38:06.584661 1518222 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1005 21:38:06.584671 1518222 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1005 21:38:06.584678 1518222 command_runner.go:130] > # Path to directory for container attach sockets.
	I1005 21:38:06.584687 1518222 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1005 21:38:06.584694 1518222 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1005 21:38:06.584699 1518222 command_runner.go:130] > # bind_mount_prefix = ""
	I1005 21:38:06.584706 1518222 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1005 21:38:06.584711 1518222 command_runner.go:130] > # read_only = false
	I1005 21:38:06.584719 1518222 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1005 21:38:06.584730 1518222 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1005 21:38:06.584736 1518222 command_runner.go:130] > # live configuration reload.
	I1005 21:38:06.584761 1518222 command_runner.go:130] > # log_level = "info"
	I1005 21:38:06.584773 1518222 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1005 21:38:06.584780 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:38:06.584785 1518222 command_runner.go:130] > # log_filter = ""
	I1005 21:38:06.584792 1518222 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1005 21:38:06.584809 1518222 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1005 21:38:06.584817 1518222 command_runner.go:130] > # separated by comma.
	I1005 21:38:06.584822 1518222 command_runner.go:130] > # uid_mappings = ""
	I1005 21:38:06.584830 1518222 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1005 21:38:06.584841 1518222 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1005 21:38:06.584846 1518222 command_runner.go:130] > # separated by comma.
	I1005 21:38:06.584859 1518222 command_runner.go:130] > # gid_mappings = ""
	I1005 21:38:06.584867 1518222 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1005 21:38:06.584875 1518222 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 21:38:06.584882 1518222 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 21:38:06.584892 1518222 command_runner.go:130] > # minimum_mappable_uid = -1
	I1005 21:38:06.584900 1518222 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1005 21:38:06.584911 1518222 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 21:38:06.584919 1518222 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 21:38:06.584929 1518222 command_runner.go:130] > # minimum_mappable_gid = -1
	I1005 21:38:06.584937 1518222 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1005 21:38:06.584951 1518222 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1005 21:38:06.584959 1518222 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1005 21:38:06.584964 1518222 command_runner.go:130] > # ctr_stop_timeout = 30
	I1005 21:38:06.584971 1518222 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1005 21:38:06.584982 1518222 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1005 21:38:06.584991 1518222 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1005 21:38:06.585001 1518222 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1005 21:38:06.585006 1518222 command_runner.go:130] > # drop_infra_ctr = true
	I1005 21:38:06.585019 1518222 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1005 21:38:06.585027 1518222 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1005 21:38:06.585036 1518222 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1005 21:38:06.585041 1518222 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1005 21:38:06.585048 1518222 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1005 21:38:06.585059 1518222 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1005 21:38:06.585065 1518222 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1005 21:38:06.585077 1518222 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1005 21:38:06.585419 1518222 command_runner.go:130] > # pinns_path = ""
	I1005 21:38:06.585435 1518222 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1005 21:38:06.585443 1518222 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1005 21:38:06.585451 1518222 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1005 21:38:06.585457 1518222 command_runner.go:130] > # default_runtime = "runc"
	I1005 21:38:06.585478 1518222 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1005 21:38:06.585492 1518222 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1005 21:38:06.585509 1518222 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1005 21:38:06.585516 1518222 command_runner.go:130] > # creation as a file is not desired either.
	I1005 21:38:06.585526 1518222 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1005 21:38:06.585532 1518222 command_runner.go:130] > # the hostname is being managed dynamically.
	I1005 21:38:06.585538 1518222 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1005 21:38:06.585542 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.585549 1518222 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1005 21:38:06.585557 1518222 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1005 21:38:06.585566 1518222 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1005 21:38:06.585574 1518222 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1005 21:38:06.585581 1518222 command_runner.go:130] > #
	I1005 21:38:06.585587 1518222 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1005 21:38:06.585593 1518222 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1005 21:38:06.585598 1518222 command_runner.go:130] > #  runtime_type = "oci"
	I1005 21:38:06.585606 1518222 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1005 21:38:06.585612 1518222 command_runner.go:130] > #  privileged_without_host_devices = false
	I1005 21:38:06.585618 1518222 command_runner.go:130] > #  allowed_annotations = []
	I1005 21:38:06.585623 1518222 command_runner.go:130] > # Where:
	I1005 21:38:06.585632 1518222 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1005 21:38:06.585645 1518222 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1005 21:38:06.585657 1518222 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1005 21:38:06.585665 1518222 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1005 21:38:06.585673 1518222 command_runner.go:130] > #   in $PATH.
	I1005 21:38:06.585681 1518222 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1005 21:38:06.585691 1518222 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1005 21:38:06.585699 1518222 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1005 21:38:06.585706 1518222 command_runner.go:130] > #   state.
	I1005 21:38:06.585714 1518222 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1005 21:38:06.585722 1518222 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1005 21:38:06.585730 1518222 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1005 21:38:06.585740 1518222 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1005 21:38:06.585749 1518222 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1005 21:38:06.585760 1518222 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1005 21:38:06.585771 1518222 command_runner.go:130] > #   The currently recognized values are:
	I1005 21:38:06.585783 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1005 21:38:06.585792 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1005 21:38:06.585800 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1005 21:38:06.585808 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1005 21:38:06.585821 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1005 21:38:06.585833 1518222 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1005 21:38:06.585844 1518222 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1005 21:38:06.585856 1518222 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1005 21:38:06.585866 1518222 command_runner.go:130] > #   should be moved to the container's cgroup
	I1005 21:38:06.585871 1518222 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1005 21:38:06.585878 1518222 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1005 21:38:06.585882 1518222 command_runner.go:130] > runtime_type = "oci"
	I1005 21:38:06.585888 1518222 command_runner.go:130] > runtime_root = "/run/runc"
	I1005 21:38:06.585896 1518222 command_runner.go:130] > runtime_config_path = ""
	I1005 21:38:06.585902 1518222 command_runner.go:130] > monitor_path = ""
	I1005 21:38:06.585910 1518222 command_runner.go:130] > monitor_cgroup = ""
	I1005 21:38:06.585915 1518222 command_runner.go:130] > monitor_exec_cgroup = ""
	I1005 21:38:06.585951 1518222 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1005 21:38:06.585961 1518222 command_runner.go:130] > # running containers
	I1005 21:38:06.585966 1518222 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1005 21:38:06.585975 1518222 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1005 21:38:06.585985 1518222 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1005 21:38:06.585997 1518222 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1005 21:38:06.586006 1518222 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1005 21:38:06.586016 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1005 21:38:06.586022 1518222 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1005 21:38:06.586028 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1005 21:38:06.586034 1518222 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1005 21:38:06.586043 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
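
Uncommenting one of the handlers above follows the same table format as the runc entry; a sketch for crun, with the binary location assumed rather than verified on this image:

	sudo tee /etc/crio/crio.conf.d/09-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"  # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio

Pods would then opt in through a Kubernetes RuntimeClass whose handler field names "crun".
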
	I1005 21:38:06.586051 1518222 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1005 21:38:06.586061 1518222 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1005 21:38:06.586143 1518222 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1005 21:38:06.586162 1518222 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1005 21:38:06.586173 1518222 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1005 21:38:06.586198 1518222 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1005 21:38:06.586215 1518222 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1005 21:38:06.586229 1518222 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1005 21:38:06.586240 1518222 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1005 21:38:06.586249 1518222 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1005 21:38:06.586254 1518222 command_runner.go:130] > # Example:
	I1005 21:38:06.586260 1518222 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1005 21:38:06.586270 1518222 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1005 21:38:06.586276 1518222 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1005 21:38:06.586286 1518222 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1005 21:38:06.586294 1518222 command_runner.go:130] > # cpuset = "0-1"
	I1005 21:38:06.586299 1518222 command_runner.go:130] > # cpushares = 0
	I1005 21:38:06.586307 1518222 command_runner.go:130] > # Where:
	I1005 21:38:06.586313 1518222 command_runner.go:130] > # The workload name is workload-type.
	I1005 21:38:06.586325 1518222 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1005 21:38:06.586332 1518222 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1005 21:38:06.586339 1518222 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1005 21:38:06.586349 1518222 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1005 21:38:06.586360 1518222 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1005 21:38:06.586368 1518222 command_runner.go:130] > # 
	I1005 21:38:06.586377 1518222 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1005 21:38:06.586384 1518222 command_runner.go:130] > #
	I1005 21:38:06.586397 1518222 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1005 21:38:06.586408 1518222 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1005 21:38:06.586416 1518222 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1005 21:38:06.586424 1518222 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1005 21:38:06.586434 1518222 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1005 21:38:06.586439 1518222 command_runner.go:130] > [crio.image]
	I1005 21:38:06.586451 1518222 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1005 21:38:06.586461 1518222 command_runner.go:130] > # default_transport = "docker://"
	I1005 21:38:06.586472 1518222 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1005 21:38:06.586483 1518222 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1005 21:38:06.586488 1518222 command_runner.go:130] > # global_auth_file = ""
	I1005 21:38:06.586494 1518222 command_runner.go:130] > # The image used to instantiate infra containers.
	I1005 21:38:06.586501 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:38:06.586511 1518222 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1005 21:38:06.586521 1518222 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1005 21:38:06.586531 1518222 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1005 21:38:06.586541 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:38:06.586550 1518222 command_runner.go:130] > # pause_image_auth_file = ""
	I1005 21:38:06.586557 1518222 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1005 21:38:06.586566 1518222 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1005 21:38:06.586575 1518222 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1005 21:38:06.586582 1518222 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1005 21:38:06.586591 1518222 command_runner.go:130] > # pause_command = "/pause"
	I1005 21:38:06.586598 1518222 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1005 21:38:06.586609 1518222 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1005 21:38:06.586620 1518222 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1005 21:38:06.586630 1518222 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1005 21:38:06.586640 1518222 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1005 21:38:06.586645 1518222 command_runner.go:130] > # signature_policy = ""
	I1005 21:38:06.586653 1518222 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1005 21:38:06.586661 1518222 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1005 21:38:06.586666 1518222 command_runner.go:130] > # changing them here.
	I1005 21:38:06.586675 1518222 command_runner.go:130] > # insecure_registries = [
	I1005 21:38:06.586679 1518222 command_runner.go:130] > # ]
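
As the comment recommends, insecure registries are better declared in containers-registries.conf(5); a sketch in the v2 format, with a hypothetical in-cluster registry:

	sudo tee -a /etc/containers/registries.conf <<-'EOF'
	[[registry]]
	location = "registry.internal:5000"  # hypothetical registry host
	insecure = true
	EOF
	sudo systemctl restart crio
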
	I1005 21:38:06.586691 1518222 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1005 21:38:06.586700 1518222 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1005 21:38:06.586930 1518222 command_runner.go:130] > # image_volumes = "mkdir"
	I1005 21:38:06.586945 1518222 command_runner.go:130] > # Temporary directory to use for storing big files
	I1005 21:38:06.586951 1518222 command_runner.go:130] > # big_files_temporary_dir = ""
	I1005 21:38:06.586959 1518222 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1005 21:38:06.586963 1518222 command_runner.go:130] > # CNI plugins.
	I1005 21:38:06.586968 1518222 command_runner.go:130] > [crio.network]
	I1005 21:38:06.586990 1518222 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1005 21:38:06.586998 1518222 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1005 21:38:06.587004 1518222 command_runner.go:130] > # cni_default_network = ""
	I1005 21:38:06.587011 1518222 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1005 21:38:06.587017 1518222 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1005 21:38:06.587024 1518222 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1005 21:38:06.587029 1518222 command_runner.go:130] > # plugin_dirs = [
	I1005 21:38:06.587034 1518222 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1005 21:38:06.587038 1518222 command_runner.go:130] > # ]
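
Pinning the network, rather than relying on first-found pick-up, is a two-line override; a sketch where the network name is illustrative and must match a config file under network_dir:

	sudo tee /etc/crio/crio.conf.d/10-cni.conf <<-'EOF'
	[crio.network]
	cni_default_network = "kindnet"  # illustrative name
	network_dir = "/etc/cni/net.d/"
	EOF
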
	I1005 21:38:06.587045 1518222 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1005 21:38:06.587051 1518222 command_runner.go:130] > [crio.metrics]
	I1005 21:38:06.587057 1518222 command_runner.go:130] > # Globally enable or disable metrics support.
	I1005 21:38:06.587063 1518222 command_runner.go:130] > # enable_metrics = false
	I1005 21:38:06.587068 1518222 command_runner.go:130] > # Specify enabled metrics collectors.
	I1005 21:38:06.587074 1518222 command_runner.go:130] > # Per default all metrics are enabled.
	I1005 21:38:06.587082 1518222 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1005 21:38:06.587089 1518222 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1005 21:38:06.587097 1518222 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1005 21:38:06.587102 1518222 command_runner.go:130] > # metrics_collectors = [
	I1005 21:38:06.587107 1518222 command_runner.go:130] > # 	"operations",
	I1005 21:38:06.587113 1518222 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1005 21:38:06.587119 1518222 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1005 21:38:06.587124 1518222 command_runner.go:130] > # 	"operations_errors",
	I1005 21:38:06.587129 1518222 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1005 21:38:06.587135 1518222 command_runner.go:130] > # 	"image_pulls_by_name",
	I1005 21:38:06.587140 1518222 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1005 21:38:06.587145 1518222 command_runner.go:130] > # 	"image_pulls_failures",
	I1005 21:38:06.587151 1518222 command_runner.go:130] > # 	"image_pulls_successes",
	I1005 21:38:06.587156 1518222 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1005 21:38:06.587166 1518222 command_runner.go:130] > # 	"image_layer_reuse",
	I1005 21:38:06.587172 1518222 command_runner.go:130] > # 	"containers_oom_total",
	I1005 21:38:06.587177 1518222 command_runner.go:130] > # 	"containers_oom",
	I1005 21:38:06.587182 1518222 command_runner.go:130] > # 	"processes_defunct",
	I1005 21:38:06.587187 1518222 command_runner.go:130] > # 	"operations_total",
	I1005 21:38:06.587193 1518222 command_runner.go:130] > # 	"operations_latency_seconds",
	I1005 21:38:06.587199 1518222 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1005 21:38:06.587204 1518222 command_runner.go:130] > # 	"operations_errors_total",
	I1005 21:38:06.587210 1518222 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1005 21:38:06.587217 1518222 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1005 21:38:06.587229 1518222 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1005 21:38:06.587235 1518222 command_runner.go:130] > # 	"image_pulls_success_total",
	I1005 21:38:06.587240 1518222 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1005 21:38:06.587246 1518222 command_runner.go:130] > # 	"containers_oom_count_total",
	I1005 21:38:06.587250 1518222 command_runner.go:130] > # ]
	I1005 21:38:06.587256 1518222 command_runner.go:130] > # The port on which the metrics server will listen.
	I1005 21:38:06.587468 1518222 command_runner.go:130] > # metrics_port = 9090
	I1005 21:38:06.587482 1518222 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1005 21:38:06.587487 1518222 command_runner.go:130] > # metrics_socket = ""
	I1005 21:38:06.587493 1518222 command_runner.go:130] > # The certificate for the secure metrics server.
	I1005 21:38:06.587503 1518222 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1005 21:38:06.587512 1518222 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1005 21:38:06.587517 1518222 command_runner.go:130] > # certificate on any modification event.
	I1005 21:38:06.587534 1518222 command_runner.go:130] > # metrics_cert = ""
	I1005 21:38:06.587544 1518222 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1005 21:38:06.587550 1518222 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1005 21:38:06.587555 1518222 command_runner.go:130] > # metrics_key = ""
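
An aside on the metrics block above: if enable_metrics were flipped to true (the commented-out defaults here leave it off), CRI-O would serve plain-text Prometheus samples on metrics_port 9090. A minimal Go sketch of a scrape against that assumed endpoint:

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)

	func main() {
		// Assumes enable_metrics = true and the default metrics_port = 9090.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		// Print only the operations-family samples named in the collector
		// list above (the "container_runtime_crio_" prefix form).
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "container_runtime_crio_operations") {
				fmt.Println(sc.Text())
			}
		}
	}
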
	I1005 21:38:06.587562 1518222 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1005 21:38:06.587566 1518222 command_runner.go:130] > [crio.tracing]
	I1005 21:38:06.587573 1518222 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1005 21:38:06.587578 1518222 command_runner.go:130] > # enable_tracing = false
	I1005 21:38:06.587584 1518222 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1005 21:38:06.587590 1518222 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1005 21:38:06.587596 1518222 command_runner.go:130] > # Number of samples to collect per million spans.
	I1005 21:38:06.587602 1518222 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1005 21:38:06.587609 1518222 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1005 21:38:06.587614 1518222 command_runner.go:130] > [crio.stats]
	I1005 21:38:06.587621 1518222 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1005 21:38:06.587628 1518222 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1005 21:38:06.587634 1518222 command_runner.go:130] > # stats_collection_period = 0
	I1005 21:38:06.588448 1518222 command_runner.go:130] ! time="2023-10-05 21:38:06.579334644Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1005 21:38:06.588484 1518222 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1005 21:38:06.588573 1518222 cni.go:84] Creating CNI manager for ""
	I1005 21:38:06.588581 1518222 cni.go:136] 1 nodes found, recommending kindnet
	I1005 21:38:06.588608 1518222 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:38:06.588630 1518222 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-814558 NodeName:multinode-814558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:38:06.588765 1518222 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-814558"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
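The generated kubeadm.yaml above is one file holding four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---" lines. A sketch that splits the file minikube writes to /var/tmp/minikube/kubeadm.yaml and lists each document's kind, using only string handling so it stays dependency-free:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// kubeadm accepts several API objects in one file, "---"-separated.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
				}
			}
		}
	}
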
	I1005 21:38:06.588840 1518222 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-814558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
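
The unit fragment above uses the standard systemd override idiom: the empty ExecStart= clears the ExecStart inherited from kubelet.service before the drop-in sets its own. A sketch of installing such a drop-in (flags trimmed for brevity; the real 10-kubeadm.conf that the scp lines below transfer carries the full flag set logged above, and this needs root):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Trimmed flag set; the logged unit passes the full kubelet arguments.
		dropin := `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml
	`
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropin), 0644); err != nil {
			panic(err)
		}
		// Make systemd re-read unit files so the override takes effect.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
	}
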
	I1005 21:38:06.588907 1518222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:38:06.598794 1518222 command_runner.go:130] > kubeadm
	I1005 21:38:06.598810 1518222 command_runner.go:130] > kubectl
	I1005 21:38:06.598815 1518222 command_runner.go:130] > kubelet
	I1005 21:38:06.600170 1518222 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:38:06.600249 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:38:06.611597 1518222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1005 21:38:06.632947 1518222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:38:06.654594 1518222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1005 21:38:06.676192 1518222 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:38:06.680829 1518222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
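The bash one-liner above makes the /etc/hosts update idempotent: strip any prior control-plane.minikube.internal line, append the fresh mapping, stage to a temp file, then copy it into place. The same logic in Go (a sketch; needs root, and the temp path here is illustrative):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.58.2\t" + host

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Mirrors `grep -v $'\tcontrol-plane.minikube.internal$'`.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// Stage then move into place, like the `> /tmp/h.$$; sudo cp` dance.
		tmp := "/etc/hosts.tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		if err := os.Rename(tmp, "/etc/hosts"); err != nil {
			panic(err)
		}
	}
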
	I1005 21:38:06.694794 1518222 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558 for IP: 192.168.58.2
	I1005 21:38:06.694825 1518222 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac5d4c0ae883432caac512ac8160283213d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:06.694972 1518222 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key
	I1005 21:38:06.695046 1518222 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key
	I1005 21:38:06.695097 1518222 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key
	I1005 21:38:06.695112 1518222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt with IP's: []
	I1005 21:38:07.049522 1518222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt ...
	I1005 21:38:07.049555 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt: {Name:mk97390e3f80ed772be03484abb08467327f223b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:07.049768 1518222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key ...
	I1005 21:38:07.049781 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key: {Name:mkb3be74e9c3002b537bc2963a4114365de521cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:07.049876 1518222 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key.cee25041
	I1005 21:38:07.049891 1518222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1005 21:38:07.910032 1518222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt.cee25041 ...
	I1005 21:38:07.910064 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt.cee25041: {Name:mk6867aab936eb7a3cfab273fba6d102f3f605ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:07.910252 1518222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key.cee25041 ...
	I1005 21:38:07.910265 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key.cee25041: {Name:mk94ab03d541220f1450b863b56414743c7b4a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:07.910348 1518222 certs.go:337] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt
	I1005 21:38:07.910424 1518222 certs.go:341] copying /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key
	I1005 21:38:07.910482 1518222 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.key
	I1005 21:38:07.910499 1518222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.crt with IP's: []
	I1005 21:38:08.218548 1518222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.crt ...
	I1005 21:38:08.218578 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.crt: {Name:mk17a8022af23a2a932091ccd67e60c4308ae204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:08.218765 1518222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.key ...
	I1005 21:38:08.218778 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.key: {Name:mkd9994624286de25530b3b720b73a3019a740f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
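Each generating/Writing pair above is minikube minting an RSA key and a certificate signed by a CA. A self-contained Go sketch of that flow using crypto/x509 (a fresh throwaway CA stands in for the shared minikubeCA, and the output file names are hypothetical):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; minikube reuses the existing minikubeCA key pair instead.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Client cert: O=system:masters is what makes a kubeconfig client
		// cert cluster-admin via Kubernetes' built-in RBAC binding.
		clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		clientTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}

		// PEM-encode to disk, mirroring the "Writing cert/key" steps above.
		os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: clientDER}), 0644)
		os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)}), 0600)
	}
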
	I1005 21:38:08.218863 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1005 21:38:08.218883 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1005 21:38:08.218895 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1005 21:38:08.218909 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1005 21:38:08.218921 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 21:38:08.218938 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 21:38:08.218952 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 21:38:08.218964 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 21:38:08.219024 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem (1338 bytes)
	W1005 21:38:08.219063 1518222 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786_empty.pem, impossibly tiny 0 bytes
	I1005 21:38:08.219076 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:38:08.219101 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:38:08.219134 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:38:08.219165 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem (1675 bytes)
	I1005 21:38:08.219209 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:38:08.219240 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:38:08.219252 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem -> /usr/share/ca-certificates/1453786.pem
	I1005 21:38:08.219263 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /usr/share/ca-certificates/14537862.pem
	I1005 21:38:08.219889 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:38:08.250594 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1005 21:38:08.280987 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:38:08.309839 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 21:38:08.338352 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:38:08.367720 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 21:38:08.395970 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:38:08.424786 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:38:08.454036 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:38:08.483653 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem --> /usr/share/ca-certificates/1453786.pem (1338 bytes)
	I1005 21:38:08.513311 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /usr/share/ca-certificates/14537862.pem (1708 bytes)
	I1005 21:38:08.543108 1518222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:38:08.564577 1518222 ssh_runner.go:195] Run: openssl version
	I1005 21:38:08.571683 1518222 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1005 21:38:08.572030 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:38:08.583965 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:38:08.588358 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:38:08.588652 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:38:08.588739 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:38:08.597026 1518222 command_runner.go:130] > b5213941
	I1005 21:38:08.597654 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1005 21:38:08.609762 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1453786.pem && ln -fs /usr/share/ca-certificates/1453786.pem /etc/ssl/certs/1453786.pem"
	I1005 21:38:08.623228 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1453786.pem
	I1005 21:38:08.628169 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:38:08.628535 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:38:08.628599 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1453786.pem
	I1005 21:38:08.637157 1518222 command_runner.go:130] > 51391683
	I1005 21:38:08.637568 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1453786.pem /etc/ssl/certs/51391683.0"
	I1005 21:38:08.650641 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14537862.pem && ln -fs /usr/share/ca-certificates/14537862.pem /etc/ssl/certs/14537862.pem"
	I1005 21:38:08.663485 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14537862.pem
	I1005 21:38:08.668338 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:38:08.668477 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:38:08.668572 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14537862.pem
	I1005 21:38:08.676864 1518222 command_runner.go:130] > 3ec20f2e
	I1005 21:38:08.677356 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14537862.pem /etc/ssl/certs/3ec20f2e.0"
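The pattern above repeats per certificate: openssl x509 -hash -noout prints the subject-name hash (b5213941, 51391683, 3ec20f2e), and OpenSSL then locates trusted CAs via /etc/ssl/certs/<hash>.0 symlinks. A Go sketch of the hash-and-symlink step for the minikubeCA file (shells out to the openssl binary, needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

		// Same as: openssl x509 -hash -noout -in <pem>  (prints "b5213941")
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// Same as: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // -f: replace a stale link if one exists
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}
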
	I1005 21:38:08.689217 1518222 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:38:08.693891 1518222 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:38:08.693968 1518222 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:38:08.694026 1518222 kubeadm.go:404] StartCluster: {Name:multinode-814558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:38:08.694105 1518222 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 21:38:08.694166 1518222 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:38:08.738155 1518222 cri.go:89] found id: ""
	I1005 21:38:08.738270 1518222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:38:08.749214 1518222 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1005 21:38:08.749243 1518222 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1005 21:38:08.749252 1518222 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1005 21:38:08.749370 1518222 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:38:08.760259 1518222 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1005 21:38:08.760370 1518222 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:38:08.770309 1518222 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1005 21:38:08.770377 1518222 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1005 21:38:08.771539 1518222 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1005 21:38:08.771558 1518222 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:38:08.771593 1518222 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1005 21:38:08.771629 1518222 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1005 21:38:08.825536 1518222 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1005 21:38:08.825618 1518222 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1005 21:38:08.825861 1518222 kubeadm.go:322] [preflight] Running pre-flight checks
	I1005 21:38:08.825903 1518222 command_runner.go:130] > [preflight] Running pre-flight checks
	I1005 21:38:08.872630 1518222 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:38:08.872694 1518222 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:38:08.872812 1518222 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:38:08.872847 1518222 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:38:08.872895 1518222 kubeadm.go:322] OS: Linux
	I1005 21:38:08.872915 1518222 command_runner.go:130] > OS: Linux
	I1005 21:38:08.872993 1518222 kubeadm.go:322] CGROUPS_CPU: enabled
	I1005 21:38:08.873017 1518222 command_runner.go:130] > CGROUPS_CPU: enabled
	I1005 21:38:08.873104 1518222 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1005 21:38:08.873136 1518222 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1005 21:38:08.873218 1518222 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1005 21:38:08.873248 1518222 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1005 21:38:08.873327 1518222 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1005 21:38:08.873374 1518222 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1005 21:38:08.873456 1518222 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1005 21:38:08.873489 1518222 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1005 21:38:08.873569 1518222 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1005 21:38:08.873598 1518222 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1005 21:38:08.873674 1518222 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1005 21:38:08.873704 1518222 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1005 21:38:08.873785 1518222 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1005 21:38:08.873816 1518222 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1005 21:38:08.873896 1518222 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1005 21:38:08.873933 1518222 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1005 21:38:08.965023 1518222 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:38:08.965094 1518222 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1005 21:38:08.965213 1518222 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:38:08.965248 1518222 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1005 21:38:08.965379 1518222 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1005 21:38:08.965414 1518222 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1005 21:38:09.244367 1518222 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:38:09.247679 1518222 out.go:204]   - Generating certificates and keys ...
	I1005 21:38:09.244441 1518222 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1005 21:38:09.247957 1518222 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1005 21:38:09.247998 1518222 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1005 21:38:09.248227 1518222 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1005 21:38:09.248264 1518222 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1005 21:38:09.543810 1518222 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:38:09.543884 1518222 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1005 21:38:10.578345 1518222 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:38:10.578369 1518222 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1005 21:38:10.921016 1518222 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1005 21:38:10.921082 1518222 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1005 21:38:11.541842 1518222 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1005 21:38:11.541867 1518222 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1005 21:38:12.534390 1518222 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1005 21:38:12.534420 1518222 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1005 21:38:12.534720 1518222 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-814558] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 21:38:12.534737 1518222 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-814558] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 21:38:12.759993 1518222 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1005 21:38:12.760023 1518222 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1005 21:38:12.760358 1518222 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-814558] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 21:38:12.760377 1518222 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-814558] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1005 21:38:13.176704 1518222 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:38:13.176729 1518222 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1005 21:38:13.525025 1518222 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:38:13.525050 1518222 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1005 21:38:14.292042 1518222 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1005 21:38:14.292069 1518222 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1005 21:38:14.292360 1518222 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:38:14.292377 1518222 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1005 21:38:14.461622 1518222 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:38:14.461655 1518222 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1005 21:38:14.782340 1518222 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:38:14.782367 1518222 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1005 21:38:15.248405 1518222 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:38:15.248429 1518222 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1005 21:38:15.524742 1518222 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:38:15.524777 1518222 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1005 21:38:15.525421 1518222 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:38:15.525448 1518222 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1005 21:38:15.528316 1518222 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:38:15.530497 1518222 out.go:204]   - Booting up control plane ...
	I1005 21:38:15.528478 1518222 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1005 21:38:15.530614 1518222 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:38:15.530635 1518222 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1005 21:38:15.530769 1518222 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:38:15.530781 1518222 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1005 21:38:15.531412 1518222 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:38:15.531428 1518222 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1005 21:38:15.542414 1518222 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:38:15.542437 1518222 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:38:15.543380 1518222 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:38:15.543402 1518222 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:38:15.543640 1518222 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1005 21:38:15.543662 1518222 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1005 21:38:15.643478 1518222 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:38:15.643506 1518222 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1005 21:38:23.647121 1518222 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002955 seconds
	I1005 21:38:23.647128 1518222 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002955 seconds
	I1005 21:38:23.647271 1518222 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:38:23.647289 1518222 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1005 21:38:23.664518 1518222 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:38:23.664547 1518222 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1005 21:38:24.190698 1518222 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:38:24.190728 1518222 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1005 21:38:24.190898 1518222 kubeadm.go:322] [mark-control-plane] Marking the node multinode-814558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 21:38:24.190908 1518222 command_runner.go:130] > [mark-control-plane] Marking the node multinode-814558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1005 21:38:24.703710 1518222 kubeadm.go:322] [bootstrap-token] Using token: yzntrf.vyorwt9tcwy76ial
	I1005 21:38:24.705456 1518222 out.go:204]   - Configuring RBAC rules ...
	I1005 21:38:24.703821 1518222 command_runner.go:130] > [bootstrap-token] Using token: yzntrf.vyorwt9tcwy76ial
	I1005 21:38:24.705589 1518222 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:38:24.705612 1518222 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1005 21:38:24.712377 1518222 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:38:24.712404 1518222 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1005 21:38:24.722867 1518222 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:38:24.722893 1518222 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1005 21:38:24.727088 1518222 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:38:24.727108 1518222 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1005 21:38:24.730899 1518222 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:38:24.730919 1518222 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1005 21:38:24.734409 1518222 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:38:24.734433 1518222 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1005 21:38:24.747954 1518222 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:38:24.747978 1518222 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1005 21:38:25.015281 1518222 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1005 21:38:25.015311 1518222 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1005 21:38:25.125930 1518222 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1005 21:38:25.125955 1518222 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1005 21:38:25.127217 1518222 kubeadm.go:322] 
	I1005 21:38:25.127288 1518222 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1005 21:38:25.127301 1518222 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1005 21:38:25.127308 1518222 kubeadm.go:322] 
	I1005 21:38:25.127385 1518222 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1005 21:38:25.127394 1518222 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1005 21:38:25.127399 1518222 kubeadm.go:322] 
	I1005 21:38:25.127423 1518222 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1005 21:38:25.127433 1518222 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1005 21:38:25.127488 1518222 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:38:25.127496 1518222 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1005 21:38:25.127543 1518222 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:38:25.127551 1518222 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1005 21:38:25.127556 1518222 kubeadm.go:322] 
	I1005 21:38:25.127607 1518222 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1005 21:38:25.127618 1518222 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1005 21:38:25.127623 1518222 kubeadm.go:322] 
	I1005 21:38:25.127673 1518222 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 21:38:25.127681 1518222 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1005 21:38:25.127685 1518222 kubeadm.go:322] 
	I1005 21:38:25.127738 1518222 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1005 21:38:25.127746 1518222 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1005 21:38:25.127816 1518222 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:38:25.127824 1518222 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1005 21:38:25.127887 1518222 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:38:25.127896 1518222 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1005 21:38:25.127900 1518222 kubeadm.go:322] 
	I1005 21:38:25.127979 1518222 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:38:25.127989 1518222 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1005 21:38:25.128060 1518222 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1005 21:38:25.128068 1518222 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1005 21:38:25.128072 1518222 kubeadm.go:322] 
	I1005 21:38:25.128157 1518222 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yzntrf.vyorwt9tcwy76ial \
	I1005 21:38:25.128165 1518222 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token yzntrf.vyorwt9tcwy76ial \
	I1005 21:38:25.128261 1518222 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d \
	I1005 21:38:25.128269 1518222 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d \
	I1005 21:38:25.128289 1518222 kubeadm.go:322] 	--control-plane 
	I1005 21:38:25.128297 1518222 command_runner.go:130] > 	--control-plane 
	I1005 21:38:25.128302 1518222 kubeadm.go:322] 
	I1005 21:38:25.128381 1518222 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:38:25.128389 1518222 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1005 21:38:25.128394 1518222 kubeadm.go:322] 
	I1005 21:38:25.128470 1518222 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yzntrf.vyorwt9tcwy76ial \
	I1005 21:38:25.128477 1518222 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yzntrf.vyorwt9tcwy76ial \
	I1005 21:38:25.128572 1518222 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d 
	I1005 21:38:25.128580 1518222 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d 
	I1005 21:38:25.132708 1518222 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:38:25.132733 1518222 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:38:25.132920 1518222 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:38:25.132932 1518222 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:38:25.132968 1518222 cni.go:84] Creating CNI manager for ""
	I1005 21:38:25.132983 1518222 cni.go:136] 1 nodes found, recommending kindnet
	I1005 21:38:25.135969 1518222 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:38:25.137502 1518222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:38:25.153425 1518222 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1005 21:38:25.153449 1518222 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1005 21:38:25.153461 1518222 command_runner.go:130] > Device: 3ah/58d	Inode: 5453116     Links: 1
	I1005 21:38:25.153468 1518222 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:38:25.153475 1518222 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1005 21:38:25.153485 1518222 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1005 21:38:25.153492 1518222 command_runner.go:130] > Change: 2023-10-05 21:15:16.567757178 +0000
	I1005 21:38:25.153498 1518222 command_runner.go:130] >  Birth: 2023-10-05 21:15:16.523757341 +0000
	I1005 21:38:25.153884 1518222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:38:25.153899 1518222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:38:25.213174 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:38:26.091413 1518222 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1005 21:38:26.102191 1518222 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1005 21:38:26.111844 1518222 command_runner.go:130] > serviceaccount/kindnet created
	I1005 21:38:26.124439 1518222 command_runner.go:130] > daemonset.apps/kindnet created
	I1005 21:38:26.130517 1518222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:38:26.130598 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:26.130636 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53 minikube.k8s.io/name=multinode-814558 minikube.k8s.io/updated_at=2023_10_05T21_38_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:26.298396 1518222 command_runner.go:130] > node/multinode-814558 labeled
	I1005 21:38:26.302481 1518222 command_runner.go:130] > -16
	I1005 21:38:26.302511 1518222 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1005 21:38:26.302536 1518222 ops.go:34] apiserver oom_adj: -16
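The oom_adj probe above confirms the kubelet gave kube-apiserver a strongly negative OOM score, so the kernel spares it under memory pressure. A sketch of the same read (pgrep-based PID discovery, as in the logged command):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.Fields(string(out))[0] // first match is enough here

		// The kubelet writes -16 here for the apiserver (as logged), telling
		// the kernel OOM killer to strongly avoid this process.
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", data)
	}
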
	I1005 21:38:26.302598 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:26.410026 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:26.410124 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:26.505221 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:27.005882 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:27.110092 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:27.505520 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:27.609226 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:28.005939 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:28.112492 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:28.506080 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:28.595901 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:29.005558 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:29.099136 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:29.505646 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:29.595146 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:30.005760 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:30.174071 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:30.505481 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:30.601452 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:31.005788 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:31.103421 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:31.506032 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:31.599274 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:32.005589 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:32.108035 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:32.505467 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:32.595204 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:33.005748 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:33.114984 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:33.505479 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:33.598199 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:34.005634 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:34.105271 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:34.505734 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:34.592708 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:35.006437 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:35.114422 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:35.505687 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:35.612162 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:36.005732 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:36.116864 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:36.506588 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:36.601734 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:37.009269 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:37.125577 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:37.505891 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:37.646598 1518222 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1005 21:38:38.006228 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1005 21:38:38.143635 1518222 command_runner.go:130] > NAME      SECRETS   AGE
	I1005 21:38:38.143657 1518222 command_runner.go:130] > default   0         1s
	I1005 21:38:38.146889 1518222 kubeadm.go:1081] duration metric: took 12.016362906s to wait for elevateKubeSystemPrivileges.
	I1005 21:38:38.146922 1518222 kubeadm.go:406] StartCluster complete in 29.452900446s
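
The loop above is minikube polling "kubectl get sa default" roughly twice a second until kubeadm finishes elevateKubeSystemPrivileges and the "default" service account exists (the NotFound errors stop at 21:38:38.143635). A minimal sketch of that wait pattern in Go; runKubectl here is a hypothetical stand-in for the ssh_runner call in the log, not minikube's real helper:

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until it succeeds or
    // the timeout elapses, mirroring the ~500ms retry cadence in the log.
    func waitForDefaultSA(runKubectl func(args ...string) error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Logged command: sudo /var/lib/minikube/binaries/v1.28.2/kubectl \
    		//   get sa default --kubeconfig=/var/lib/minikube/kubeconfig
    		if err := runKubectl("get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig"); err == nil {
    			return nil // service account exists; privileges are elevated
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for the default service account")
    }
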
	I1005 21:38:38.146939 1518222 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:38.147002 1518222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:38:38.147735 1518222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:38:38.148239 1518222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:38:38.148529 1518222 kapi.go:59] client config for multinode-814558: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:38:38.149729 1518222 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:38:38.149802 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:38:38.149940 1518222 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:38:38.150031 1518222 addons.go:69] Setting storage-provisioner=true in profile "multinode-814558"
	I1005 21:38:38.150052 1518222 addons.go:231] Setting addon storage-provisioner=true in "multinode-814558"
	I1005 21:38:38.150104 1518222 host.go:66] Checking if "multinode-814558" exists ...
	I1005 21:38:38.150565 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:38:38.151023 1518222 addons.go:69] Setting default-storageclass=true in profile "multinode-814558"
	I1005 21:38:38.151045 1518222 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-814558"
	I1005 21:38:38.151313 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:38:38.151519 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 21:38:38.151534 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.151543 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.151550 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.151778 1518222 cert_rotation.go:137] Starting client certificate rotation controller
	I1005 21:38:38.200822 1518222 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1005 21:38:38.206997 1518222 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:38:38.207016 1518222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1005 21:38:38.207070 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:38.206209 1518222 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I1005 21:38:38.207983 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.208001 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.208009 1518222 round_trippers.go:580]     Content-Length: 291
	I1005 21:38:38.208016 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.208026 1518222 round_trippers.go:580]     Audit-Id: 8d192b2e-b7ce-4b97-b83a-98a2dd36f7e2
	I1005 21:38:38.208033 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.208040 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.208050 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.208077 1518222 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5545a75a-e1ab-458a-8428-11a477671681","resourceVersion":"348","creationTimestamp":"2023-10-05T21:38:24Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1005 21:38:38.208473 1518222 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5545a75a-e1ab-458a-8428-11a477671681","resourceVersion":"348","creationTimestamp":"2023-10-05T21:38:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1005 21:38:38.208531 1518222 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 21:38:38.208544 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.208551 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.208562 1518222 round_trippers.go:473]     Content-Type: application/json
	I1005 21:38:38.208569 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.206884 1518222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:38:38.209005 1518222 kapi.go:59] client config for multinode-814558: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:38:38.209293 1518222 addons.go:231] Setting addon default-storageclass=true in "multinode-814558"
	I1005 21:38:38.209328 1518222 host.go:66] Checking if "multinode-814558" exists ...
	I1005 21:38:38.209786 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:38:38.226254 1518222 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1005 21:38:38.226275 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.226287 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.226294 1518222 round_trippers.go:580]     Audit-Id: 5d359232-9ba8-4106-bb65-6c6df2a05594
	I1005 21:38:38.226300 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.226306 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.226312 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.226318 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.226326 1518222 round_trippers.go:580]     Content-Length: 291
	I1005 21:38:38.226349 1518222 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5545a75a-e1ab-458a-8428-11a477671681","resourceVersion":"353","creationTimestamp":"2023-10-05T21:38:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1005 21:38:38.226490 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 21:38:38.226496 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.226504 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.226511 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.246842 1518222 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1005 21:38:38.246864 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.246874 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.246881 1518222 round_trippers.go:580]     Audit-Id: 426f8f24-60df-4850-88a3-5d3c47839ebe
	I1005 21:38:38.246887 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.246893 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.246902 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.246909 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.246915 1518222 round_trippers.go:580]     Content-Length: 291
	I1005 21:38:38.246939 1518222 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5545a75a-e1ab-458a-8428-11a477671681","resourceVersion":"353","creationTimestamp":"2023-10-05T21:38:24Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1005 21:38:38.247033 1518222 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-814558" context rescaled to 1 replicas
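
The GET/PUT pair above drives the deployment's scale subresource (/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale), changing spec.replicas from 2 to 1 while status.replicas still reports 2 until the controller catches up. The same exchange through client-go, as a minimal sketch assuming an already-constructed *kubernetes.Clientset (minikube builds its own REST client in kapi; these are just the equivalent calls):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS performs the same GET-then-PUT against the Scale
    // subresource that the round_trippers lines above record.
    func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
    	deployments := cs.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1 // spec.replicas: 2 -> 1, as in the PUT body
    	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
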
	I1005 21:38:38.247058 1518222 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:38:38.249123 1518222 out.go:177] * Verifying Kubernetes components...
	I1005 21:38:38.247448 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:38.254304 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:38:38.258737 1518222 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1005 21:38:38.258762 1518222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1005 21:38:38.258825 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:38:38.303349 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:38:38.394551 1518222 command_runner.go:130] > apiVersion: v1
	I1005 21:38:38.394582 1518222 command_runner.go:130] > data:
	I1005 21:38:38.394589 1518222 command_runner.go:130] >   Corefile: |
	I1005 21:38:38.394594 1518222 command_runner.go:130] >     .:53 {
	I1005 21:38:38.394598 1518222 command_runner.go:130] >         errors
	I1005 21:38:38.394604 1518222 command_runner.go:130] >         health {
	I1005 21:38:38.394610 1518222 command_runner.go:130] >            lameduck 5s
	I1005 21:38:38.394615 1518222 command_runner.go:130] >         }
	I1005 21:38:38.394619 1518222 command_runner.go:130] >         ready
	I1005 21:38:38.394630 1518222 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1005 21:38:38.394640 1518222 command_runner.go:130] >            pods insecure
	I1005 21:38:38.394653 1518222 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1005 21:38:38.394663 1518222 command_runner.go:130] >            ttl 30
	I1005 21:38:38.394669 1518222 command_runner.go:130] >         }
	I1005 21:38:38.394678 1518222 command_runner.go:130] >         prometheus :9153
	I1005 21:38:38.394684 1518222 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1005 21:38:38.394693 1518222 command_runner.go:130] >            max_concurrent 1000
	I1005 21:38:38.394698 1518222 command_runner.go:130] >         }
	I1005 21:38:38.394708 1518222 command_runner.go:130] >         cache 30
	I1005 21:38:38.394713 1518222 command_runner.go:130] >         loop
	I1005 21:38:38.394718 1518222 command_runner.go:130] >         reload
	I1005 21:38:38.394734 1518222 command_runner.go:130] >         loadbalance
	I1005 21:38:38.394743 1518222 command_runner.go:130] >     }
	I1005 21:38:38.394748 1518222 command_runner.go:130] > kind: ConfigMap
	I1005 21:38:38.394756 1518222 command_runner.go:130] > metadata:
	I1005 21:38:38.394765 1518222 command_runner.go:130] >   creationTimestamp: "2023-10-05T21:38:24Z"
	I1005 21:38:38.394774 1518222 command_runner.go:130] >   name: coredns
	I1005 21:38:38.394779 1518222 command_runner.go:130] >   namespace: kube-system
	I1005 21:38:38.394788 1518222 command_runner.go:130] >   resourceVersion: "230"
	I1005 21:38:38.394794 1518222 command_runner.go:130] >   uid: 1957ea5e-d840-4035-894a-be7afcb0181f
	I1005 21:38:38.398029 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1005 21:38:38.398502 1518222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:38:38.398800 1518222 kapi.go:59] client config for multinode-814558: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:38:38.399102 1518222 node_ready.go:35] waiting up to 6m0s for node "multinode-814558" to be "Ready" ...
	I1005 21:38:38.399190 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:38.399202 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.399211 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.399218 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.435228 1518222 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I1005 21:38:38.435257 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.435267 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.435274 1518222 round_trippers.go:580]     Audit-Id: 684ae932-3755-40f5-beb4-3cf22ed98c7e
	I1005 21:38:38.435286 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.435294 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.435300 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.435306 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.435474 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:38.436267 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:38.436286 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.436294 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.436301 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.445722 1518222 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1005 21:38:38.445747 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.445756 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.445762 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.445769 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.445776 1518222 round_trippers.go:580]     Audit-Id: ed11442f-db7e-444d-b58d-9ba634146394
	I1005 21:38:38.445782 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.445792 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.445906 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:38.507050 1518222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1005 21:38:38.526007 1518222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1005 21:38:38.893673 1518222 command_runner.go:130] > configmap/coredns replaced
	I1005 21:38:38.899146 1518222 start.go:923] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
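
The sed pipeline run at 21:38:38.398029 rewrites the Corefile dumped above before feeding it back through "kubectl replace": one expression inserts a log directive ahead of errors, the other inserts a hosts block ahead of the forward stanza so host.minikube.internal resolves to the gateway address 192.168.58.1. Reconstructed from those sed expressions (indentation approximate), the patched Corefile is:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
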
	I1005 21:38:38.946872 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:38.946896 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:38.946915 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:38.946923 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:38.962324 1518222 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1005 21:38:38.962352 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:38.962361 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:38 GMT
	I1005 21:38:38.962376 1518222 round_trippers.go:580]     Audit-Id: 6f29bbb3-0037-45a1-ada5-fff56a7ecf85
	I1005 21:38:38.962383 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:38.962389 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:38.962396 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:38.962403 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:38.962522 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:39.189591 1518222 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1005 21:38:39.196857 1518222 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1005 21:38:39.207812 1518222 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1005 21:38:39.216239 1518222 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1005 21:38:39.224602 1518222 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1005 21:38:39.238049 1518222 command_runner.go:130] > pod/storage-provisioner created
	I1005 21:38:39.243482 1518222 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1005 21:38:39.243598 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1005 21:38:39.243618 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:39.243627 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:39.243638 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:39.247478 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:38:39.247501 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:39.247510 1518222 round_trippers.go:580]     Content-Length: 1273
	I1005 21:38:39.247530 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:39 GMT
	I1005 21:38:39.247537 1518222 round_trippers.go:580]     Audit-Id: 86ee14b6-c85d-4909-96ee-0a5b794a389c
	I1005 21:38:39.247546 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:39.247553 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:39.247563 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:39.247569 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:39.247883 1518222 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"376"},"items":[{"metadata":{"name":"standard","uid":"83bac768-fc22-4683-8c9e-edc5c56c7ab9","resourceVersion":"370","creationTimestamp":"2023-10-05T21:38:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T21:38:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1005 21:38:39.248329 1518222 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"83bac768-fc22-4683-8c9e-edc5c56c7ab9","resourceVersion":"370","creationTimestamp":"2023-10-05T21:38:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T21:38:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1005 21:38:39.248388 1518222 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1005 21:38:39.248400 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:39.248409 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:39.248416 1518222 round_trippers.go:473]     Content-Type: application/json
	I1005 21:38:39.248426 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:39.253021 1518222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 21:38:39.253042 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:39.253051 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:39.253058 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:39.253064 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:39.253071 1518222 round_trippers.go:580]     Content-Length: 1220
	I1005 21:38:39.253077 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:39 GMT
	I1005 21:38:39.253084 1518222 round_trippers.go:580]     Audit-Id: de60453c-1977-438b-90fb-7bbe8dae2b3b
	I1005 21:38:39.253090 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:39.253171 1518222 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"83bac768-fc22-4683-8c9e-edc5c56c7ab9","resourceVersion":"370","creationTimestamp":"2023-10-05T21:38:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-05T21:38:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1005 21:38:39.255457 1518222 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1005 21:38:39.257266 1518222 addons.go:502] enable addons completed in 1.107313748s: enabled=[storage-provisioner default-storageclass]
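
The PUT at 21:38:39.248388 is the default-storageclass addon asserting the storageclass.kubernetes.io/is-default-class: "true" annotation on the "standard" StorageClass that storageclass.yaml just created. A minimal client-go sketch of that step, again assuming a prebuilt *kubernetes.Clientset rather than minikube's own client plumbing:

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ensureDefaultStorageClass marks "standard" as the cluster default,
    // matching the annotation in the PUT request body logged above.
    func ensureDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) error {
    	classes := cs.StorageV1().StorageClasses()
    	sc, err := classes.Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	_, err = classes.Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }
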
	I1005 21:38:39.447046 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:39.447073 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:39.447084 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:39.447092 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:39.449711 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:39.449738 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:39.449748 1518222 round_trippers.go:580]     Audit-Id: 6ca3a5b0-ff48-4239-a6e0-f5b786546e60
	I1005 21:38:39.449755 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:39.449762 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:39.449768 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:39.449776 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:39.449784 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:39 GMT
	I1005 21:38:39.450041 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:39.946543 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:39.946601 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:39.946619 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:39.946628 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:39.949258 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:39.949324 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:39.949366 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:39.949417 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:39.949431 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:39.949438 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:39 GMT
	I1005 21:38:39.949444 1518222 round_trippers.go:580]     Audit-Id: c1da9573-5328-4b18-a1c0-b5d9234ebf0a
	I1005 21:38:39.949451 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:39.949575 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:40.447133 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:40.447159 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:40.447168 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:40.447176 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:40.449955 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:40.450021 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:40.450044 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:40 GMT
	I1005 21:38:40.450068 1518222 round_trippers.go:580]     Audit-Id: dd0eeca6-9ded-455c-bc60-ba59abd49224
	I1005 21:38:40.450100 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:40.450127 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:40.450147 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:40.450170 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:40.450316 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:40.450757 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
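
From here node_ready.go polls GET /api/v1/nodes/multinode-814558 about twice a second, logging "Ready":"False" until the kubelet posts a Ready condition of True. The predicate behind that loop, sketched with client-go under the same clientset assumption as the earlier sketches:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's Ready condition is True, the
    // check the node_ready wait loop above keeps re-evaluating.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // no Ready condition posted yet
    }
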
	I1005 21:38:40.946554 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:40.946580 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:40.946590 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:40.946598 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:40.949091 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:40.949117 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:40.949126 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:40.949142 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:40.949149 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:40.949155 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:40 GMT
	I1005 21:38:40.949162 1518222 round_trippers.go:580]     Audit-Id: 9645fd7b-f366-4d14-a02a-3e102a8359cc
	I1005 21:38:40.949168 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:40.949300 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:41.447471 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:41.447496 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:41.447506 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:41.447514 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:41.450279 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:41.450304 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:41.450313 1518222 round_trippers.go:580]     Audit-Id: 2588ed84-e803-4b5a-ac0e-31cb10de729f
	I1005 21:38:41.450320 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:41.450327 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:41.450333 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:41.450340 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:41.450350 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:41 GMT
	I1005 21:38:41.450549 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:41.947457 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:41.947482 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:41.947493 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:41.947500 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:41.950113 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:41.950174 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:41.950195 1518222 round_trippers.go:580]     Audit-Id: 9c8a1d07-0dee-4089-b897-0735ca0b7904
	I1005 21:38:41.950217 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:41.950250 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:41.950272 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:41.950293 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:41.950314 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:41 GMT
	I1005 21:38:41.950456 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:42.446552 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:42.446578 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:42.446588 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:42.446597 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:42.449323 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:42.449369 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:42.449379 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:42.449385 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:42.449392 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:42.449398 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:42.449405 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:42 GMT
	I1005 21:38:42.449411 1518222 round_trippers.go:580]     Audit-Id: 6addee07-0440-48e9-9ef6-9fbc0dbde4bf
	I1005 21:38:42.449644 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:42.946658 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:42.946682 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:42.946696 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:42.946708 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:42.949402 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:42.949465 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:42.949490 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:42.949513 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:42.949556 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:42 GMT
	I1005 21:38:42.949582 1518222 round_trippers.go:580]     Audit-Id: 05074f2b-2c6e-4aad-a56e-d9721d8d34e1
	I1005 21:38:42.949604 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:42.949627 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:42.949815 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:42.950251 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:43.446971 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:43.446994 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:43.447004 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:43.447014 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:43.449515 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:43.449541 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:43.449550 1518222 round_trippers.go:580]     Audit-Id: 78031c95-9e4b-4684-82e7-721a2753e570
	I1005 21:38:43.449557 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:43.449564 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:43.449570 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:43.449577 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:43.449583 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:43 GMT
	I1005 21:38:43.449817 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:43.947208 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:43.947233 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:43.947243 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:43.947250 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:43.949874 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:43.949944 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:43.949967 1518222 round_trippers.go:580]     Audit-Id: 22f644c6-dcd3-4086-9fba-624921645ccf
	I1005 21:38:43.949988 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:43.950014 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:43.950034 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:43.950047 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:43.950056 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:43 GMT
	I1005 21:38:43.950174 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:44.446600 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:44.446624 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:44.446651 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:44.446663 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:44.450688 1518222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 21:38:44.450763 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:44.450785 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:44.450808 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:44.450836 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:44.450844 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:44 GMT
	I1005 21:38:44.450851 1518222 round_trippers.go:580]     Audit-Id: 3e1cbab1-6558-473b-b351-6de90ce80721
	I1005 21:38:44.450857 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:44.451013 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:44.947271 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:44.947323 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:44.947339 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:44.947352 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:44.950304 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:44.950342 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:44.950351 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:44 GMT
	I1005 21:38:44.950357 1518222 round_trippers.go:580]     Audit-Id: ec7ee8f5-1ce9-438e-ae31-28627d84f755
	I1005 21:38:44.950364 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:44.950370 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:44.950376 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:44.950383 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:44.950550 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:44.950962 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:45.447299 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:45.447324 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:45.447336 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:45.447343 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:45.450546 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:38:45.450573 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:45.450583 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:45.450590 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:45.450602 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:45 GMT
	I1005 21:38:45.450609 1518222 round_trippers.go:580]     Audit-Id: 3a594ea1-950a-448f-a29d-3e9a79e75e9e
	I1005 21:38:45.450615 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:45.450626 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:45.451125 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:45.946663 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:45.946689 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:45.946712 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:45.946720 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:45.949378 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:45.949402 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:45.949411 1518222 round_trippers.go:580]     Audit-Id: d161482e-a078-4e00-8ab3-d35d63cc286b
	I1005 21:38:45.949418 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:45.949424 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:45.949430 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:45.949436 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:45.949442 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:45 GMT
	I1005 21:38:45.949583 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:46.446735 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:46.446760 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:46.446770 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:46.446777 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:46.449998 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:38:46.450024 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:46.450032 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:46.450080 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:46.450092 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:46.450100 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:46 GMT
	I1005 21:38:46.450113 1518222 round_trippers.go:580]     Audit-Id: 8a86761e-7985-4c60-935c-d61352c031a4
	I1005 21:38:46.450120 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:46.450242 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:46.946585 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:46.946611 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:46.946622 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:46.946629 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:46.949158 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:46.949185 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:46.949195 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:46.949202 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:46 GMT
	I1005 21:38:46.949209 1518222 round_trippers.go:580]     Audit-Id: 7a628fb7-cc43-4af0-b511-5d7a71dcfeac
	I1005 21:38:46.949215 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:46.949221 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:46.949232 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:46.949377 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:47.446500 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:47.446525 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:47.446534 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:47.446541 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:47.449407 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:47.449429 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:47.449437 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:47.449443 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:47.449450 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:47.449456 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:47.449462 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:47 GMT
	I1005 21:38:47.449469 1518222 round_trippers.go:580]     Audit-Id: 8c37bf60-bdc2-45ee-b9fe-b88ca2cb1e0b
	I1005 21:38:47.450040 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:47.450507 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:47.946683 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:47.946706 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:47.946715 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:47.946723 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:47.949362 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:47.949388 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:47.949396 1518222 round_trippers.go:580]     Audit-Id: 165eb4b8-c2d8-43f9-ac9b-a543bf927309
	I1005 21:38:47.949403 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:47.949411 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:47.949417 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:47.949423 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:47.949429 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:47 GMT
	I1005 21:38:47.949531 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:48.446936 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:48.446965 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:48.446975 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:48.446982 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:48.449449 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:48.449470 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:48.449479 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:48 GMT
	I1005 21:38:48.449486 1518222 round_trippers.go:580]     Audit-Id: 7a48224d-0130-4f96-836b-dc4d16e8a058
	I1005 21:38:48.449492 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:48.449498 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:48.449505 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:48.449514 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:48.449911 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:48.946716 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:48.946742 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:48.946752 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:48.946760 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:48.949292 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:48.949317 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:48.949326 1518222 round_trippers.go:580]     Audit-Id: 44ea2879-42a6-4739-a2de-f9369d86bf4a
	I1005 21:38:48.949350 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:48.949358 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:48.949364 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:48.949375 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:48.949382 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:48 GMT
	I1005 21:38:48.949580 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:49.446568 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:49.446594 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:49.446604 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:49.446611 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:49.449658 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:38:49.449682 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:49.449691 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:49.449698 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:49 GMT
	I1005 21:38:49.449705 1518222 round_trippers.go:580]     Audit-Id: 5a014e2e-d9fd-41bb-9f28-79323f1dcd76
	I1005 21:38:49.449711 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:49.449718 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:49.449727 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:49.450196 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:49.450599 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:49.947283 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:49.947305 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:49.947314 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:49.947322 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:49.950016 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:49.950044 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:49.950053 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:49 GMT
	I1005 21:38:49.950060 1518222 round_trippers.go:580]     Audit-Id: b37d7157-5c95-4fba-aa52-1da9b8404a36
	I1005 21:38:49.950066 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:49.950074 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:49.950080 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:49.950086 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:49.950200 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:50.447329 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:50.447356 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:50.447366 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:50.447373 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:50.449951 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:50.449971 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:50.449980 1518222 round_trippers.go:580]     Audit-Id: 84723b2d-a37c-49c5-a3f7-f67a7bbe3af7
	I1005 21:38:50.449986 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:50.449993 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:50.449999 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:50.450005 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:50.450011 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:50 GMT
	I1005 21:38:50.450154 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:50.947266 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:50.947288 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:50.947298 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:50.947306 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:50.949897 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:50.949994 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:50.950010 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:50 GMT
	I1005 21:38:50.950017 1518222 round_trippers.go:580]     Audit-Id: e98c641f-9b80-4271-9311-64eb766915a5
	I1005 21:38:50.950024 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:50.950030 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:50.950039 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:50.950050 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:50.950170 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:51.446528 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:51.446553 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:51.446568 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:51.446580 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:51.449277 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:51.449314 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:51.449324 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:51 GMT
	I1005 21:38:51.449330 1518222 round_trippers.go:580]     Audit-Id: 9bf8c1df-4004-4e9c-bb63-53c784468c50
	I1005 21:38:51.449397 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:51.449405 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:51.449417 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:51.449423 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:51.449561 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:51.946729 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:51.946755 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:51.946764 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:51.946771 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:51.949426 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:51.949448 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:51.949458 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:51.949464 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:51.949470 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:51.949476 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:51.949483 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:51 GMT
	I1005 21:38:51.949489 1518222 round_trippers.go:580]     Audit-Id: 490e907f-babb-44e1-8889-d166c2dd255d
	I1005 21:38:51.949602 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:51.949997 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:52.446739 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:52.446765 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:52.446775 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:52.446782 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:52.449421 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:52.449442 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:52.449451 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:52.449457 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:52.449464 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:52 GMT
	I1005 21:38:52.449470 1518222 round_trippers.go:580]     Audit-Id: b1a9635b-fa1f-48f8-bd2b-74f686145361
	I1005 21:38:52.449476 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:52.449483 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:52.449638 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:52.947303 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:52.947328 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:52.947338 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:52.947345 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:52.949957 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:52.949978 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:52.949987 1518222 round_trippers.go:580]     Audit-Id: 87fd6271-6e45-427d-9d82-4a6b5b2eeaec
	I1005 21:38:52.949994 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:52.950001 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:52.950007 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:52.950013 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:52.950019 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:52 GMT
	I1005 21:38:52.950178 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:53.446666 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:53.446693 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:53.446703 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:53.446710 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:53.449231 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:53.449260 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:53.449269 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:53.449276 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:53 GMT
	I1005 21:38:53.449283 1518222 round_trippers.go:580]     Audit-Id: 8974c9ab-46c5-4513-9c7f-ad721c4c78c8
	I1005 21:38:53.449291 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:53.449297 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:53.449304 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:53.449459 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:53.946986 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:53.947022 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:53.947032 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:53.947039 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:53.949637 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:53.949661 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:53.949671 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:53.949678 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:53 GMT
	I1005 21:38:53.949684 1518222 round_trippers.go:580]     Audit-Id: ee6935a8-5e6f-44e0-91f9-7920c35dc28a
	I1005 21:38:53.949690 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:53.949697 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:53.949703 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:53.949823 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:53.950220 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:54.446564 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:54.446604 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:54.446615 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:54.446623 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:54.449184 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:54.449208 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:54.449217 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:54.449224 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:54.449230 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:54.449236 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:54.449243 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:54 GMT
	I1005 21:38:54.449249 1518222 round_trippers.go:580]     Audit-Id: accbe971-722e-4deb-84be-34889308a213
	I1005 21:38:54.449581 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:54.947277 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:54.947302 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:54.947313 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:54.947321 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:54.949877 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:54.949898 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:54.949907 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:54.949916 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:54 GMT
	I1005 21:38:54.949922 1518222 round_trippers.go:580]     Audit-Id: b4119ee8-3e43-41b5-9ffa-488b6e7b63dd
	I1005 21:38:54.949928 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:54.949934 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:54.949940 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:54.950054 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:55.447210 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:55.447235 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:55.447245 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:55.447252 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:55.449920 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:55.449946 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:55.449957 1518222 round_trippers.go:580]     Audit-Id: 699ef2a8-8b6b-4b4d-833c-8157b78d3f58
	I1005 21:38:55.449963 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:55.449970 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:55.449976 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:55.449985 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:55.449992 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:55 GMT
	I1005 21:38:55.450217 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:55.946972 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:55.946995 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:55.947004 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:55.947012 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:55.949753 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:55.949787 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:55.949796 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:55.949803 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:55.949809 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:55.949824 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:55.949835 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:55 GMT
	I1005 21:38:55.949841 1518222 round_trippers.go:580]     Audit-Id: e4fbf7b7-29f7-46f3-bbda-52bfc10b0b99
	I1005 21:38:55.949985 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:55.950447 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
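Each polling iteration above has the same anatomy: client-go's debug round tripper logs the request line (round_trippers.go:463), the request headers (:473), the response status and latency (:574), the response headers (:580), and finally a truncated copy of the response body (request.go:1212). Headers plus truncated bodies like these are what client-go emits at high klog verbosity; a minimal sketch of enabling that in a Go program (assuming k8s.io/klog/v2 and standard flag wiring, not minikube's actual flag plumbing) might look like:

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		// klog registers a -v flag on the default flag set; at high verbosity
		// (around -v=8) client-go's debug round tripper prints request and
		// response headers plus truncated bodies, the format seen in this log.
		klog.InitFlags(nil)
		flag.Set("v", "8")
		flag.Parse()
		// ... construct a kubernetes client and issue requests as usual ...
	}
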
	I1005 21:38:56.447198 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:56.447226 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:56.447236 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:56.447243 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:56.449747 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:56.449768 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:56.449776 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:56.449783 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:56.449790 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:56 GMT
	I1005 21:38:56.449796 1518222 round_trippers.go:580]     Audit-Id: c7b134b8-a6cc-4823-a12e-ff85d9fbb988
	I1005 21:38:56.449802 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:56.449808 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:56.449930 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:56.946501 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:56.946527 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:56.946537 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:56.946545 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:56.949136 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:56.949156 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:56.949165 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:56 GMT
	I1005 21:38:56.949171 1518222 round_trippers.go:580]     Audit-Id: 621c5ccf-d9fc-4238-9a86-66d37f5f9e47
	I1005 21:38:56.949177 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:56.949185 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:56.949191 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:56.949198 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:56.949397 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:57.446520 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:57.446547 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:57.446557 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:57.446564 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:57.449328 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:57.449378 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:57.449391 1518222 round_trippers.go:580]     Audit-Id: b4700d2c-05d4-4965-a82f-4b26bbf68e5f
	I1005 21:38:57.449399 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:57.449405 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:57.449411 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:57.449417 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:57.449424 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:57 GMT
	I1005 21:38:57.449620 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:57.946573 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:57.946601 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:57.946610 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:57.946618 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:57.949243 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:57.949276 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:57.949290 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:57.949299 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:57.949306 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:57.949312 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:57 GMT
	I1005 21:38:57.949322 1518222 round_trippers.go:580]     Audit-Id: e5464f32-745f-4e59-90af-65d7dc84cb50
	I1005 21:38:57.949329 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:57.949461 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:58.446545 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:58.446571 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:58.446581 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:58.446588 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:58.449051 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:58.449075 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:58.449085 1518222 round_trippers.go:580]     Audit-Id: 6d6828c8-639a-45f6-8c44-844a7d5957a5
	I1005 21:38:58.449091 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:58.449097 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:58.449104 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:58.449110 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:58.449124 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:58 GMT
	I1005 21:38:58.449561 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:58.449979 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:38:58.946552 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:58.946575 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:58.946584 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:58.946591 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:58.949131 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:58.949152 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:58.949160 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:58.949167 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:58.949173 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:58.949179 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:58 GMT
	I1005 21:38:58.949185 1518222 round_trippers.go:580]     Audit-Id: 02f41eb6-c596-4dac-9108-a04e4577e8a8
	I1005 21:38:58.949193 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:58.949379 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:59.446500 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:59.446527 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:59.446537 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:59.446544 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:59.449661 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:38:59.449689 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:59.449698 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:59 GMT
	I1005 21:38:59.449705 1518222 round_trippers.go:580]     Audit-Id: 05c3908e-cd28-40c3-a002-0072e2248235
	I1005 21:38:59.449711 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:59.449718 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:59.449724 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:59.449731 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:59.449972 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:38:59.946901 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:38:59.946928 1518222 round_trippers.go:469] Request Headers:
	I1005 21:38:59.946939 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:38:59.946946 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:38:59.949461 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:38:59.949485 1518222 round_trippers.go:577] Response Headers:
	I1005 21:38:59.949494 1518222 round_trippers.go:580]     Audit-Id: 62fa06d4-9e2c-4ae2-9413-adad46450da6
	I1005 21:38:59.949500 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:38:59.949507 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:38:59.949513 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:38:59.949520 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:38:59.949531 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:38:59 GMT
	I1005 21:38:59.949801 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:00.446583 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:00.446610 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:00.446620 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:00.446627 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:00.449772 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:00.449814 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:00.449824 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:00 GMT
	I1005 21:39:00.449831 1518222 round_trippers.go:580]     Audit-Id: b51aaa13-1fbe-4f19-bdee-9a5bcc844e82
	I1005 21:39:00.449842 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:00.449849 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:00.449860 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:00.449867 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:00.450402 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:00.450847 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:39:00.947123 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:00.947149 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:00.947162 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:00.947170 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:00.949756 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:00.949794 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:00.949817 1518222 round_trippers.go:580]     Audit-Id: 1b417c45-26d6-4654-b5d6-3961e55c6a59
	I1005 21:39:00.949829 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:00.949836 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:00.949848 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:00.949855 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:00.949866 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:00 GMT
	I1005 21:39:00.950296 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:01.447025 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:01.447059 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:01.447071 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:01.447081 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:01.450148 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:01.450183 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:01.450202 1518222 round_trippers.go:580]     Audit-Id: b4093451-8d34-42dc-9209-8a729ac46d06
	I1005 21:39:01.450210 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:01.450221 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:01.450229 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:01.450238 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:01.450245 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:01 GMT
	I1005 21:39:01.450579 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:01.947175 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:01.947201 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:01.947211 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:01.947218 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:01.949847 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:01.949874 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:01.949883 1518222 round_trippers.go:580]     Audit-Id: 86715ae6-02a7-4eee-9531-f92cfa7d1dfc
	I1005 21:39:01.949890 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:01.949896 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:01.949902 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:01.949908 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:01.949915 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:01 GMT
	I1005 21:39:01.950246 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:02.446877 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:02.446904 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:02.446914 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:02.446921 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:02.449549 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:02.449571 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:02.449580 1518222 round_trippers.go:580]     Audit-Id: cd225aff-5292-421b-b91a-ab4c8d93339f
	I1005 21:39:02.449587 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:02.449593 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:02.449599 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:02.449605 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:02.449613 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:02 GMT
	I1005 21:39:02.449874 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:02.947134 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:02.947159 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:02.947170 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:02.947177 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:02.949912 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:02.949938 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:02.949948 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:02 GMT
	I1005 21:39:02.949955 1518222 round_trippers.go:580]     Audit-Id: 7bd8bafb-bbf1-4862-869a-421e93fb056a
	I1005 21:39:02.949963 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:02.949969 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:02.949975 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:02.949983 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:02.950238 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:02.950677 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:39:03.447165 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:03.447187 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:03.447196 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:03.447203 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:03.449904 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:03.449929 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:03.449939 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:03.449946 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:03.449952 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:03.449958 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:03.449965 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:03 GMT
	I1005 21:39:03.449978 1518222 round_trippers.go:580]     Audit-Id: 5da3b44c-edd7-4c64-96f8-4f6b49d3312b
	I1005 21:39:03.450169 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:03.946558 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:03.946590 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:03.946603 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:03.946611 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:03.949280 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:03.949303 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:03.949312 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:03.949319 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:03 GMT
	I1005 21:39:03.949325 1518222 round_trippers.go:580]     Audit-Id: 6c4ab8ca-137d-4cc2-a677-fa24b47231a2
	I1005 21:39:03.949354 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:03.949363 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:03.949369 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:03.949479 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:04.446562 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:04.446587 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:04.446596 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:04.446603 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:04.449191 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:04.449221 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:04.449230 1518222 round_trippers.go:580]     Audit-Id: c2b153d3-9a28-475b-8a9a-f63cc4875b3d
	I1005 21:39:04.449237 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:04.449243 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:04.449249 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:04.449256 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:04.449266 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:04 GMT
	I1005 21:39:04.449401 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:04.946527 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:04.946555 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:04.946565 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:04.946573 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:04.949220 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:04.949240 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:04.949248 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:04.949254 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:04.949261 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:04.949268 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:04 GMT
	I1005 21:39:04.949274 1518222 round_trippers.go:580]     Audit-Id: 16aa3134-8e32-4f21-95c7-3423beea049b
	I1005 21:39:04.949280 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:04.949794 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:05.446510 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:05.446535 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:05.446546 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:05.446553 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:05.449099 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:05.449122 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:05.449140 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:05 GMT
	I1005 21:39:05.449147 1518222 round_trippers.go:580]     Audit-Id: 1aa843c8-6a2b-48c1-adc1-075c31ec5a95
	I1005 21:39:05.449153 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:05.449160 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:05.449166 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:05.449172 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:05.449358 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:05.449761 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
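The loop the log records is simple: GET the Node object roughly every 500ms and stop once its Ready condition turns True or a timeout expires. A minimal client-go sketch of the same wait, assuming a hypothetical kubeconfig path and using the node name from this test, could look like the following (this is an illustration of the pattern, not minikube's actual node_ready.go implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the node's Ready condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; adjust for your environment.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, as the log above does, until Ready or timeout.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "multinode-814558", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				return isNodeReady(node), nil
			})
		if err != nil {
			fmt.Println("node never became Ready:", err)
			return
		}
		fmt.Println("node is Ready")
	}

A "Ready":"False" result at each iteration, as seen throughout this stretch of the log, means the kubelet has registered the node but its NodeReady condition has not yet flipped to True (commonly because the pod network is still coming up), so the waiter keeps polling.
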
	I1005 21:39:05.946520 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:05.946544 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:05.946553 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:05.946561 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:05.949358 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:05.949385 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:05.949394 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:05.949400 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:05.949407 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:05.949414 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:05 GMT
	I1005 21:39:05.949421 1518222 round_trippers.go:580]     Audit-Id: a5b6f8b3-029b-4767-b540-cc9eef50b664
	I1005 21:39:05.949430 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:05.949804 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:06.446568 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:06.446598 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:06.446608 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:06.446615 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:06.449316 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:06.449361 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:06.449371 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:06.449377 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:06 GMT
	I1005 21:39:06.449384 1518222 round_trippers.go:580]     Audit-Id: 7f29afbd-e477-4ede-afa4-d59c46c978da
	I1005 21:39:06.449390 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:06.449396 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:06.449408 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:06.449679 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:06.947415 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:06.947441 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:06.947452 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:06.947460 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:06.950049 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:06.950122 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:06.950164 1518222 round_trippers.go:580]     Audit-Id: 762d914c-5898-4d4e-95e9-69bc76a5efe5
	I1005 21:39:06.950179 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:06.950200 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:06.950212 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:06.950219 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:06.950229 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:06 GMT
	I1005 21:39:06.950356 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:07.446942 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:07.446968 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:07.446978 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:07.446985 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:07.449715 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:07.449745 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:07.449756 1518222 round_trippers.go:580]     Audit-Id: ad9282d3-be90-435c-ada2-1ed512590bef
	I1005 21:39:07.449762 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:07.449770 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:07.449776 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:07.449782 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:07.449790 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:07 GMT
	I1005 21:39:07.449920 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:07.450343 1518222 node_ready.go:58] node "multinode-814558" has status "Ready":"False"
	I1005 21:39:07.947160 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:07.947185 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:07.947198 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:07.947206 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:07.949798 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:07.949823 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:07.949832 1518222 round_trippers.go:580]     Audit-Id: 4f274879-0de2-41b2-aa46-49ab576c4c25
	I1005 21:39:07.949838 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:07.949844 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:07.949850 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:07.949856 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:07.949863 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:07 GMT
	I1005 21:39:07.949981 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:08.446827 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:08.446852 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.446862 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.446869 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.449544 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.449617 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.449640 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.449653 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.449661 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.449667 1518222 round_trippers.go:580]     Audit-Id: 8dbf779f-44af-433f-ba72-6f4c361ae2a6
	I1005 21:39:08.449687 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.449702 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.449850 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"319","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1005 21:39:08.947527 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:08.947553 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.947563 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.947571 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.950183 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.950205 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.950213 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.950220 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.950227 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.950233 1518222 round_trippers.go:580]     Audit-Id: 884601c4-b471-46b3-8c8e-4d64e52d540c
	I1005 21:39:08.950239 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.950245 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.950344 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:08.950720 1518222 node_ready.go:49] node "multinode-814558" has status "Ready":"True"
	I1005 21:39:08.950731 1518222 node_ready.go:38] duration metric: took 30.551609295s waiting for node "multinode-814558" to be "Ready" ...
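
The poll loop that just completed (node_ready.go) repeats a GET on /api/v1/nodes/multinode-814558 roughly every 500ms until the node's Ready condition reports True, as the timestamps above show. A minimal client-go sketch of that check, under the assumption of an already-built clientset; the helper names waitNodeReady and nodeReady are illustrative, not minikube's actual functions:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the Node's Ready condition is True --
// the same test the log applies to the JSON bodies above.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitNodeReady polls the apiserver until the named node is Ready or
// the timeout elapses, mirroring the ~500ms GET cadence in the log.
func waitNodeReady(c kubernetes.Interface, name string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
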
	I1005 21:39:08.950741 1518222 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:39:08.950834 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:08.950841 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.950848 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.950856 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.954488 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:08.954515 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.954525 1518222 round_trippers.go:580]     Audit-Id: 51fd58c3-c506-4c66-92b6-04af47e10970
	I1005 21:39:08.954531 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.954538 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.954545 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.954552 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.954559 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.954996 1518222 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"396","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1005 21:39:08.959037 1518222 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:08.959136 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:08.959148 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.959157 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.959165 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.961901 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.961934 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.961942 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.961949 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.961956 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.961962 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.961972 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.961978 1518222 round_trippers.go:580]     Audit-Id: a8aa17f5-9a56-49a6-af18-b8128730e3a1
	I1005 21:39:08.962090 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"396","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 21:39:08.962684 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:08.962707 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.962716 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.962723 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.965124 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.965150 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.965159 1518222 round_trippers.go:580]     Audit-Id: 9506c5fc-48b8-4f2b-8a40-70480a08e005
	I1005 21:39:08.965165 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.965172 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.965179 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.965194 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.965202 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.965569 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:08.966023 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:08.966039 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.966048 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.966056 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.968549 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.968573 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.968585 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.968592 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.968599 1518222 round_trippers.go:580]     Audit-Id: 9b3de6f2-4b95-4443-bfcc-6649295103f0
	I1005 21:39:08.968605 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.968611 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.968620 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.969009 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"396","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 21:39:08.969575 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:08.969593 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:08.969601 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:08.969608 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:08.971911 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:08.971934 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:08.971943 1518222 round_trippers.go:580]     Audit-Id: 5e3e7519-106a-4803-aaa8-b4dabe4a487e
	I1005 21:39:08.971950 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:08.971957 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:08.971963 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:08.971977 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:08.971988 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:08 GMT
	I1005 21:39:08.972104 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:09.472807 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:09.472846 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:09.472857 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:09.472864 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:09.476973 1518222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 21:39:09.477008 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:09.477017 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:09.477024 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:09 GMT
	I1005 21:39:09.477031 1518222 round_trippers.go:580]     Audit-Id: d9114c8d-1c8a-4d77-8276-955d3e74effb
	I1005 21:39:09.477037 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:09.477044 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:09.477051 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:09.477189 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"396","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 21:39:09.477782 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:09.477799 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:09.477815 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:09.477822 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:09.481130 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:09.481175 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:09.481184 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:09.481191 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:09.481197 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:09 GMT
	I1005 21:39:09.481203 1518222 round_trippers.go:580]     Audit-Id: e8f5f3d8-80df-4ce2-9bce-31f5846af9ba
	I1005 21:39:09.481216 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:09.481225 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:09.481382 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:09.972801 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:09.972829 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:09.972840 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:09.972847 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:09.975448 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:09.975473 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:09.975481 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:09.975488 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:09 GMT
	I1005 21:39:09.975494 1518222 round_trippers.go:580]     Audit-Id: d75f79f0-501b-450d-83ef-c3e1268dca00
	I1005 21:39:09.975503 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:09.975509 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:09.975520 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:09.975635 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"396","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1005 21:39:09.976168 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:09.976184 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:09.976193 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:09.976200 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:09.978688 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:09.978758 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:09.978775 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:09.978783 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:09.978789 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:09.978796 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:09.978807 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:09 GMT
	I1005 21:39:09.978813 1518222 round_trippers.go:580]     Audit-Id: 2698d9f9-69f1-43ba-ae3e-8f73b56d9a7e
	I1005 21:39:09.978925 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.473535 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:10.473559 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.473570 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.473577 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.476350 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.476382 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.476393 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.476400 1518222 round_trippers.go:580]     Audit-Id: 492b94d0-59fd-4f93-b022-4db59ef9d871
	I1005 21:39:10.476406 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.476413 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.476423 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.476430 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.476541 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"409","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1005 21:39:10.477088 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.477105 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.477115 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.477123 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.479638 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.479662 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.479671 1518222 round_trippers.go:580]     Audit-Id: 388bc4c2-bb90-40f9-9271-1763553e8d2e
	I1005 21:39:10.479677 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.479683 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.479690 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.479696 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.479703 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.479839 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.480236 1518222 pod_ready.go:92] pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:10.480257 1518222 pod_ready.go:81] duration metric: took 1.521187535s waiting for pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace to be "Ready" ...
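
Each per-pod wait in this phase (pod_ready.go) follows the same pattern against /api/v1/namespaces/kube-system/pods/<name>: fetch the Pod, inspect its PodReady condition, and re-poll until it reports True or the 6m0s budget runs out. A hedged client-go sketch of that loop; waitPodReady is an illustrative name, not minikube's helper:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady blocks until the pod's PodReady condition is True or
// the timeout elapses (the log allows 6m0s per system pod).
func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		p, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range p.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
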
	I1005 21:39:10.480270 1518222 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.480340 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-814558
	I1005 21:39:10.480348 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.480356 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.480363 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.482958 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.483016 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.483056 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.483083 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.483105 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.483142 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.483170 1518222 round_trippers.go:580]     Audit-Id: b2c40a5b-5f25-448b-bef4-0788fa29424b
	I1005 21:39:10.483196 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.483303 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-814558","namespace":"kube-system","uid":"f9ec7415-1ccc-4ab0-a62e-855fd2e89920","resourceVersion":"265","creationTimestamp":"2023-10-05T21:38:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5e488af0d1cc97f30d1e85d9d7859da3","kubernetes.io/config.mirror":"5e488af0d1cc97f30d1e85d9d7859da3","kubernetes.io/config.seen":"2023-10-05T21:38:17.423098181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1005 21:39:10.483770 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.483786 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.483794 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.483802 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.486182 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.486206 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.486215 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.486221 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.486228 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.486234 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.486241 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.486252 1518222 round_trippers.go:580]     Audit-Id: e1271faf-7f8a-4ef0-a5fc-49c88ef6b27e
	I1005 21:39:10.486419 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.486810 1518222 pod_ready.go:92] pod "etcd-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:10.486830 1518222 pod_ready.go:81] duration metric: took 6.548957ms waiting for pod "etcd-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.486847 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.486915 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-814558
	I1005 21:39:10.486924 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.486932 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.486940 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.489503 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.489569 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.489591 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.489616 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.489649 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.489666 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.489674 1518222 round_trippers.go:580]     Audit-Id: a99acc3b-264e-4fac-8721-f30f56db2df1
	I1005 21:39:10.489681 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.489839 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-814558","namespace":"kube-system","uid":"5d4b6568-b5be-4a73-b543-87354078f3e7","resourceVersion":"270","creationTimestamp":"2023-10-05T21:38:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48dffb1033d3aa2f4aa5ffa4543bf256","kubernetes.io/config.mirror":"48dffb1033d3aa2f4aa5ffa4543bf256","kubernetes.io/config.seen":"2023-10-05T21:38:17.423099773Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1005 21:39:10.490422 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.490440 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.490448 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.490455 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.492926 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.492994 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.493017 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.493039 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.493075 1518222 round_trippers.go:580]     Audit-Id: 6030a7b9-a5d9-428d-96b9-a35ca17ed57e
	I1005 21:39:10.493095 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.493103 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.493109 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.493240 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.493694 1518222 pod_ready.go:92] pod "kube-apiserver-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:10.493714 1518222 pod_ready.go:81] duration metric: took 6.854154ms waiting for pod "kube-apiserver-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.493726 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.493793 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-814558
	I1005 21:39:10.493802 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.493809 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.493817 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.496860 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:10.496884 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.496893 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.496900 1518222 round_trippers.go:580]     Audit-Id: 3844d420-13ee-445c-8fcc-aae0aec7bfc6
	I1005 21:39:10.496907 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.496913 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.496929 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.496939 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.497491 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-814558","namespace":"kube-system","uid":"e3b6b429-bc4a-460a-9328-17bdb559510d","resourceVersion":"269","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e1f0f9cedaae17855a8cbeaa7f6b78c","kubernetes.io/config.mirror":"0e1f0f9cedaae17855a8cbeaa7f6b78c","kubernetes.io/config.seen":"2023-10-05T21:38:17.423101110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1005 21:39:10.548465 1518222 request.go:629] Waited for 50.238926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.548548 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.548559 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.548568 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.548575 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.551381 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.551451 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.551474 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.551494 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.551529 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.551557 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.551635 1518222 round_trippers.go:580]     Audit-Id: 1afcc172-996c-407d-8fa3-f8ba70fb1398
	I1005 21:39:10.551661 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.551793 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.552200 1518222 pod_ready.go:92] pod "kube-controller-manager-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:10.552219 1518222 pod_ready.go:81] duration metric: took 58.481214ms waiting for pod "kube-controller-manager-multinode-814558" in "kube-system" namespace to be "Ready" ...
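
The "Waited for ... due to client-side throttling" lines above and below come from client-go's default client-side rate limiter, which caps requests at QPS=5 with Burst=10; GETs issued faster than that are delayed locally rather than rejected by the apiserver. A sketch of where those knobs live on rest.Config, with illustrative values (not what minikube configures):

package tune

import "k8s.io/client-go/rest"

// relaxThrottle raises the client-side limits that produce the
// throttling waits logged by request.go:629. client-go defaults
// are QPS=5 and Burst=10.
func relaxThrottle(cfg *rest.Config) {
	cfg.QPS = 50
	cfg.Burst = 100
}
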
	I1005 21:39:10.552232 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lftrk" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.747563 1518222 request.go:629] Waited for 195.266848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lftrk
	I1005 21:39:10.747625 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lftrk
	I1005 21:39:10.747638 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.747649 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.747659 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.750373 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.750392 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.750403 1518222 round_trippers.go:580]     Audit-Id: 5bc4fb74-a762-490c-8ff5-303ab9c50835
	I1005 21:39:10.750409 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.750416 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.750422 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.750428 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.750439 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.750552 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lftrk","generateName":"kube-proxy-","namespace":"kube-system","uid":"00a86d93-f9f8-4616-9b0d-639530776c04","resourceVersion":"360","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"030d1c05-ca2b-42bc-8181-c0109b2fd192","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"030d1c05-ca2b-42bc-8181-c0109b2fd192\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1005 21:39:10.948385 1518222 request.go:629] Waited for 197.346976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.948452 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:10.948459 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:10.948467 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:10.948475 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:10.951135 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:10.951154 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:10.951163 1518222 round_trippers.go:580]     Audit-Id: 90f8e1fa-21c1-4a80-a609-e9302e2d7586
	I1005 21:39:10.951170 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:10.951176 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:10.951182 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:10.951188 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:10.951199 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:10 GMT
	I1005 21:39:10.951310 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:10.951702 1518222 pod_ready.go:92] pod "kube-proxy-lftrk" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:10.951723 1518222 pod_ready.go:81] duration metric: took 399.484094ms waiting for pod "kube-proxy-lftrk" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:10.951735 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:11.147933 1518222 request.go:629] Waited for 196.121338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814558
	I1005 21:39:11.148006 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814558
	I1005 21:39:11.148013 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.148022 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.148030 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.150747 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:11.150817 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.150840 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.150863 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.150893 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.150902 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.150911 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.150927 1518222 round_trippers.go:580]     Audit-Id: df3d61ce-0e5a-4b25-b0d1-51ec46890b93
	I1005 21:39:11.151067 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-814558","namespace":"kube-system","uid":"d161dcc2-6d30-4384-826e-ccbbc539edda","resourceVersion":"283","creationTimestamp":"2023-10-05T21:38:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcea6c17a12dd03ffc181c343e33d23a","kubernetes.io/config.mirror":"dcea6c17a12dd03ffc181c343e33d23a","kubernetes.io/config.seen":"2023-10-05T21:38:25.080108120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1005 21:39:11.347873 1518222 request.go:629] Waited for 196.348717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:11.347935 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:11.347945 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.347954 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.347965 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.350724 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:11.350753 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.350762 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.350777 1518222 round_trippers.go:580]     Audit-Id: 9d3ccca5-0e7a-4a84-bdad-7b1e7e24ee2e
	I1005 21:39:11.350784 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.350790 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.350800 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.350807 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.351069 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:11.351489 1518222 pod_ready.go:92] pod "kube-scheduler-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:11.351506 1518222 pod_ready.go:81] duration metric: took 399.75954ms waiting for pod "kube-scheduler-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:11.351519 1518222 pod_ready.go:38] duration metric: took 2.400767027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
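
The pod_ready waits above reduce to polling each pod until its Ready condition reports True. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path and a 500ms poll interval (neither value appears in this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, which is
	// what the pod_ready.go lines above are waiting for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-multinode-814558", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // poll interval is an assumption
		}
	}
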
	I1005 21:39:11.351541 1518222 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:39:11.351604 1518222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:39:11.363443 1518222 command_runner.go:130] > 1279
	I1005 21:39:11.364861 1518222 api_server.go:72] duration metric: took 33.117773083s to wait for apiserver process to appear ...
	I1005 21:39:11.364889 1518222 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:39:11.364909 1518222 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 21:39:11.374051 1518222 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1005 21:39:11.374119 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1005 21:39:11.374142 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.374155 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.374166 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.375324 1518222 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1005 21:39:11.375352 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.375360 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.375367 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.375377 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.375387 1518222 round_trippers.go:580]     Content-Length: 263
	I1005 21:39:11.375393 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.375399 1518222 round_trippers.go:580]     Audit-Id: 2e33aaeb-15b7-4c0b-b423-010435d7235c
	I1005 21:39:11.375405 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.375426 1518222 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1005 21:39:11.375514 1518222 api_server.go:141] control plane version: v1.28.2
	I1005 21:39:11.375532 1518222 api_server.go:131] duration metric: took 10.634865ms to wait for apiserver health ...
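
The healthz probe above is a bare HTTPS GET that expects the literal body "ok". A self-contained sketch of the same check; skipping TLS verification here is an assumption for brevity, whereas the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The apiserver serves a cert signed by the cluster CA; for a quick
		// sketch we skip verification instead of loading that CA (assumption).
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
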
	I1005 21:39:11.375542 1518222 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:39:11.547986 1518222 request.go:629] Waited for 172.354547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:11.548045 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:11.548052 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.548061 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.548069 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.551722 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:11.551747 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.551756 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.551763 1518222 round_trippers.go:580]     Audit-Id: c4e7284f-25f9-42c4-aed4-836716eb9f38
	I1005 21:39:11.551769 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.551775 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.551789 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.551795 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.552233 1518222 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"409","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1005 21:39:11.554636 1518222 system_pods.go:59] 8 kube-system pods found
	I1005 21:39:11.554669 1518222 system_pods.go:61] "coredns-5dd5756b68-6bvj5" [c0961e1d-4075-4c8e-94d9-9c34564f71df] Running
	I1005 21:39:11.554677 1518222 system_pods.go:61] "etcd-multinode-814558" [f9ec7415-1ccc-4ab0-a62e-855fd2e89920] Running
	I1005 21:39:11.554682 1518222 system_pods.go:61] "kindnet-q47f5" [4022c47f-9cbd-4500-a2aa-92e0caaedf99] Running
	I1005 21:39:11.554687 1518222 system_pods.go:61] "kube-apiserver-multinode-814558" [5d4b6568-b5be-4a73-b543-87354078f3e7] Running
	I1005 21:39:11.554699 1518222 system_pods.go:61] "kube-controller-manager-multinode-814558" [e3b6b429-bc4a-460a-9328-17bdb559510d] Running
	I1005 21:39:11.554712 1518222 system_pods.go:61] "kube-proxy-lftrk" [00a86d93-f9f8-4616-9b0d-639530776c04] Running
	I1005 21:39:11.554717 1518222 system_pods.go:61] "kube-scheduler-multinode-814558" [d161dcc2-6d30-4384-826e-ccbbc539edda] Running
	I1005 21:39:11.554722 1518222 system_pods.go:61] "storage-provisioner" [ddcc6c9f-5045-4c99-9808-25700d745ce0] Running
	I1005 21:39:11.554729 1518222 system_pods.go:74] duration metric: took 179.178146ms to wait for pod list to return data ...
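
The repeated "Waited for ... due to client-side throttling" lines in this phase come from client-go's local token-bucket rate limiter, not from server-side priority and fairness. The knob lives on rest.Config; a sketch with illustrative values (client-go's defaults are QPS=5, Burst=10):

	package main

	import "k8s.io/client-go/rest"

	// relaxThrottle raises the client-side token bucket that produced the
	// "Waited for ... due to client-side throttling" lines above.
	// The values below are illustrative, not taken from this run.
	func relaxThrottle(cfg *rest.Config) {
		cfg.QPS = 50
		cfg.Burst = 100
	}

	func main() {
		cfg := &rest.Config{Host: "https://192.168.58.2:8443"} // illustrative
		relaxThrottle(cfg)
		_ = cfg
	}
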
	I1005 21:39:11.554739 1518222 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:39:11.748147 1518222 request.go:629] Waited for 193.334406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 21:39:11.748208 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1005 21:39:11.748220 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.748231 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.748240 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.750993 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:11.751017 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.751026 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.751033 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.751039 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.751046 1518222 round_trippers.go:580]     Content-Length: 261
	I1005 21:39:11.751052 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.751059 1518222 round_trippers.go:580]     Audit-Id: a4d3c223-1acf-4e31-8a01-d6482af7a50a
	I1005 21:39:11.751069 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.751091 1518222 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0cf97ff9-e249-41ed-9a85-55cf43d321a8","resourceVersion":"330","creationTimestamp":"2023-10-05T21:38:37Z"}}]}
	I1005 21:39:11.751288 1518222 default_sa.go:45] found service account: "default"
	I1005 21:39:11.751306 1518222 default_sa.go:55] duration metric: took 196.560408ms for default service account to be created ...
	I1005 21:39:11.751316 1518222 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:39:11.947621 1518222 request.go:629] Waited for 196.241353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:11.947738 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:11.947752 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:11.947762 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:11.947769 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:11.951195 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:11.951219 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:11.951229 1518222 round_trippers.go:580]     Audit-Id: 6b0a49db-c280-486d-a426-e1859f438624
	I1005 21:39:11.951236 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:11.951242 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:11.951248 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:11.951255 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:11.951272 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:11 GMT
	I1005 21:39:11.951981 1518222 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"409","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1005 21:39:11.954378 1518222 system_pods.go:86] 8 kube-system pods found
	I1005 21:39:11.954408 1518222 system_pods.go:89] "coredns-5dd5756b68-6bvj5" [c0961e1d-4075-4c8e-94d9-9c34564f71df] Running
	I1005 21:39:11.954416 1518222 system_pods.go:89] "etcd-multinode-814558" [f9ec7415-1ccc-4ab0-a62e-855fd2e89920] Running
	I1005 21:39:11.954421 1518222 system_pods.go:89] "kindnet-q47f5" [4022c47f-9cbd-4500-a2aa-92e0caaedf99] Running
	I1005 21:39:11.954433 1518222 system_pods.go:89] "kube-apiserver-multinode-814558" [5d4b6568-b5be-4a73-b543-87354078f3e7] Running
	I1005 21:39:11.954447 1518222 system_pods.go:89] "kube-controller-manager-multinode-814558" [e3b6b429-bc4a-460a-9328-17bdb559510d] Running
	I1005 21:39:11.954452 1518222 system_pods.go:89] "kube-proxy-lftrk" [00a86d93-f9f8-4616-9b0d-639530776c04] Running
	I1005 21:39:11.954457 1518222 system_pods.go:89] "kube-scheduler-multinode-814558" [d161dcc2-6d30-4384-826e-ccbbc539edda] Running
	I1005 21:39:11.954462 1518222 system_pods.go:89] "storage-provisioner" [ddcc6c9f-5045-4c99-9808-25700d745ce0] Running
	I1005 21:39:11.954471 1518222 system_pods.go:126] duration metric: took 203.149415ms to wait for k8s-apps to be running ...
	I1005 21:39:11.954482 1518222 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:39:11.954545 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:39:11.968724 1518222 system_svc.go:56] duration metric: took 14.232969ms WaitForService to wait for kubelet.
	I1005 21:39:11.968753 1518222 kubeadm.go:581] duration metric: took 33.721670204s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
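
The kubelet check just above relies purely on the exit status of systemctl is-active --quiet, which prints nothing and returns 0 only when the unit is active. The same probe via os/exec, mirroring the exact command from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive runs the same command as the log above; with --quiet, systemctl
	// signals the unit state purely through its exit code.
	func isActive(unit string) bool {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", unit).Run()
		return err == nil
	}

	func main() {
		fmt.Println("kubelet active:", isActive("kubelet"))
	}
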
	I1005 21:39:11.968774 1518222 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:39:12.148155 1518222 request.go:629] Waited for 179.306499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1005 21:39:12.148217 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1005 21:39:12.148228 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:12.148237 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:12.148253 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:12.150925 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:12.150947 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:12.150955 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:12.150962 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:12 GMT
	I1005 21:39:12.150969 1518222 round_trippers.go:580]     Audit-Id: 281657a4-e001-446f-86bb-c5de31f1fbae
	I1005 21:39:12.150981 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:12.150987 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:12.150994 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:12.151181 1518222 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1005 21:39:12.151667 1518222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:39:12.151693 1518222 node_conditions.go:123] node cpu capacity is 2
	I1005 21:39:12.151704 1518222 node_conditions.go:105] duration metric: took 182.925166ms to run NodePressure ...
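
The capacity figures above (203034800Ki of ephemeral storage, 2 CPUs) are read straight out of each Node's status. A short client-go sketch, again assuming an illustrative kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The two capacity figures logged above come from exactly these fields.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
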
	I1005 21:39:12.151733 1518222 start.go:228] waiting for startup goroutines ...
	I1005 21:39:12.151747 1518222 start.go:233] waiting for cluster config update ...
	I1005 21:39:12.151758 1518222 start.go:242] writing updated cluster config ...
	I1005 21:39:12.154465 1518222 out.go:177] 
	I1005 21:39:12.156191 1518222 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:39:12.156285 1518222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json ...
	I1005 21:39:12.158299 1518222 out.go:177] * Starting worker node multinode-814558-m02 in cluster multinode-814558
	I1005 21:39:12.160060 1518222 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:39:12.161674 1518222 out.go:177] * Pulling base image ...
	I1005 21:39:12.164104 1518222 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:39:12.164165 1518222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:39:12.164422 1518222 cache.go:57] Caching tarball of preloaded images
	I1005 21:39:12.164547 1518222 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:39:12.164577 1518222 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:39:12.164717 1518222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json ...
	I1005 21:39:12.187958 1518222 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:39:12.187980 1518222 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:39:12.188054 1518222 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:39:12.188120 1518222 start.go:365] acquiring machines lock for multinode-814558-m02: {Name:mka081b51b7c4396c8a9b9cffd22bb366e8f8a5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:39:12.188388 1518222 start.go:369] acquired machines lock for "multinode-814558-m02" in 243.635µs
	I1005 21:39:12.188423 1518222 start.go:93] Provisioning new machine with config: &{Name:multinode-814558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1005 21:39:12.188595 1518222 start.go:125] createHost starting for "m02" (driver="docker")
	I1005 21:39:12.191455 1518222 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 21:39:12.191584 1518222 start.go:159] libmachine.API.Create for "multinode-814558" (driver="docker")
	I1005 21:39:12.191617 1518222 client.go:168] LocalClient.Create starting
	I1005 21:39:12.191703 1518222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:39:12.191740 1518222 main.go:141] libmachine: Decoding PEM data...
	I1005 21:39:12.191762 1518222 main.go:141] libmachine: Parsing certificate...
	I1005 21:39:12.191819 1518222 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:39:12.191842 1518222 main.go:141] libmachine: Decoding PEM data...
	I1005 21:39:12.191857 1518222 main.go:141] libmachine: Parsing certificate...
	I1005 21:39:12.192108 1518222 cli_runner.go:164] Run: docker network inspect multinode-814558 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:39:12.210182 1518222 network_create.go:77] Found existing network {name:multinode-814558 subnet:0x4000dfb950 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1005 21:39:12.210235 1518222 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-814558-m02" container
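
The "calculated static IP" step hands out consecutive host addresses in the cluster network: .1 for the gateway, .2 for the primary node, .3 for this worker. A sketch of that arithmetic for an IPv4 /24 (the helper name is invented for illustration):

	package main

	import (
		"fmt"
		"net"
	)

	// nthHost returns the nth address after the network base, e.g. n=2 ->
	// 192.168.58.2 (primary node) and n=3 -> 192.168.58.3 (this worker).
	// Bumping only the last octet is enough for a /24 like this one.
	func nthHost(cidr string, n byte) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("not an IPv4 network: %s", cidr)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += n
		return out, nil
	}

	func main() {
		ip, err := nthHost("192.168.58.0/24", 3)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 192.168.58.3, matching the log line above
	}
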
	I1005 21:39:12.210311 1518222 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:39:12.230221 1518222 cli_runner.go:164] Run: docker volume create multinode-814558-m02 --label name.minikube.sigs.k8s.io=multinode-814558-m02 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:39:12.251119 1518222 oci.go:103] Successfully created a docker volume multinode-814558-m02
	I1005 21:39:12.251204 1518222 cli_runner.go:164] Run: docker run --rm --name multinode-814558-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814558-m02 --entrypoint /usr/bin/test -v multinode-814558-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:39:12.845653 1518222 oci.go:107] Successfully prepared a docker volume multinode-814558-m02
	I1005 21:39:12.845712 1518222 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:39:12.845735 1518222 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:39:12.845818 1518222 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-814558-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:39:17.226580 1518222 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-814558-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.380717648s)
	I1005 21:39:17.226615 1518222 kic.go:199] duration metric: took 4.380876 seconds to extract preloaded images to volume
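
The two docker run commands above use a familiar pattern: a throwaway container populates the named volume, first forcing its creation, then untarring the lz4 preload into it so the node container boots with its images already in place. A thin exec wrapper around the second command, with argument values taken from the log (the log pins the image by sha256 digest; the tag alone is used here for brevity):

	package main

	import "os/exec"

	// extractPreload replays the pattern above: a one-shot container
	// bind-mounts the preload tarball read-only and untars it into the
	// named volume that the node container will later mount at /var.
	func extractPreload(volume, tarball, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		return cmd.Run()
	}

	func main() {
		if err := extractPreload(
			"multinode-814558-m02",
			"/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345",
		); err != nil {
			panic(err)
		}
	}
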
	W1005 21:39:17.226751 1518222 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:39:17.226862 1518222 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:39:17.299021 1518222 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-814558-m02 --name multinode-814558-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814558-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-814558-m02 --network multinode-814558 --ip 192.168.58.3 --volume multinode-814558-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:39:17.662528 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Running}}
	I1005 21:39:17.687132 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Status}}
	I1005 21:39:17.737737 1518222 cli_runner.go:164] Run: docker exec multinode-814558-m02 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:39:17.851012 1518222 oci.go:144] the created container "multinode-814558-m02" has a running status.
	I1005 21:39:17.851039 1518222 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa...
	I1005 21:39:18.431298 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 21:39:18.431350 1518222 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:39:18.470199 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Status}}
	I1005 21:39:18.512219 1518222 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:39:18.512244 1518222 kic_runner.go:114] Args: [docker exec --privileged multinode-814558-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:39:18.633852 1518222 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Status}}
	I1005 21:39:18.662635 1518222 machine.go:88] provisioning docker machine ...
	I1005 21:39:18.662670 1518222 ubuntu.go:169] provisioning hostname "multinode-814558-m02"
	I1005 21:39:18.662738 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:18.696222 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:39:18.696646 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1005 21:39:18.696666 1518222 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-814558-m02 && echo "multinode-814558-m02" | sudo tee /etc/hostname
	I1005 21:39:18.887717 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-814558-m02
	
	I1005 21:39:18.887802 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:18.914685 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:39:18.915073 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1005 21:39:18.915091 1518222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-814558-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-814558-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-814558-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:39:19.055556 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:39:19.055634 1518222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:39:19.055675 1518222 ubuntu.go:177] setting up certificates
	I1005 21:39:19.055726 1518222 provision.go:83] configureAuth start
	I1005 21:39:19.055829 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558-m02
	I1005 21:39:19.077878 1518222 provision.go:138] copyHostCerts
	I1005 21:39:19.077920 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:39:19.077950 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:39:19.077964 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:39:19.078037 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:39:19.078121 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:39:19.078144 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:39:19.078153 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:39:19.078181 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:39:19.078226 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:39:19.078247 1518222 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:39:19.078258 1518222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:39:19.078284 1518222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:39:19.078330 1518222 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.multinode-814558-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-814558-m02]
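
The server certificate generated above is a CA-signed leaf whose SANs cover the node IP, loopback, and the hostnames minikube will dial. A standalone crypto/x509 sketch of that shape; the in-memory throwaway CA and the 2048-bit key size are assumptions for the sake of a runnable example, since the real flow loads ca.pem and ca-key.pem from the .minikube certs directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// makeServerCert sketches the "generating server cert" step: an RSA leaf
	// certificate carrying the SANs from the log, signed by the cluster CA.
	func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-814558-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-814558-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		f, err := os.Create("server.pem")
		if err != nil {
			return err
		}
		defer f.Close()
		return pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

	func main() {
		// Throwaway CA so the sketch runs standalone (assumption).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		if err := makeServerCert(caCert, caKey); err != nil {
			panic(err)
		}
	}
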
	I1005 21:39:19.625475 1518222 provision.go:172] copyRemoteCerts
	I1005 21:39:19.625590 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:39:19.625662 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:19.645383 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:39:19.745933 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 21:39:19.746000 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1005 21:39:19.775622 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 21:39:19.775684 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 21:39:19.805496 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 21:39:19.805569 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:39:19.835568 1518222 provision.go:86] duration metric: configureAuth took 779.81153ms
	I1005 21:39:19.835594 1518222 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:39:19.835812 1518222 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:39:19.835933 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:19.855859 1518222 main.go:141] libmachine: Using SSH client type: native
	I1005 21:39:19.856284 1518222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1005 21:39:19.856301 1518222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:39:20.128221 1518222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:39:20.128250 1518222 machine.go:91] provisioned docker machine in 1.46559193s
	I1005 21:39:20.128261 1518222 client.go:171] LocalClient.Create took 7.936633428s
	I1005 21:39:20.128275 1518222 start.go:167] duration metric: libmachine.API.Create for "multinode-814558" took 7.936692899s
	I1005 21:39:20.128282 1518222 start.go:300] post-start starting for "multinode-814558-m02" (driver="docker")
	I1005 21:39:20.128295 1518222 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:39:20.128364 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:39:20.128410 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:20.147574 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:39:20.252717 1518222 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:39:20.256823 1518222 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1005 21:39:20.256860 1518222 command_runner.go:130] > NAME="Ubuntu"
	I1005 21:39:20.256873 1518222 command_runner.go:130] > VERSION_ID="22.04"
	I1005 21:39:20.256879 1518222 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1005 21:39:20.256885 1518222 command_runner.go:130] > VERSION_CODENAME=jammy
	I1005 21:39:20.256890 1518222 command_runner.go:130] > ID=ubuntu
	I1005 21:39:20.256895 1518222 command_runner.go:130] > ID_LIKE=debian
	I1005 21:39:20.256900 1518222 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1005 21:39:20.256909 1518222 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1005 21:39:20.256919 1518222 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1005 21:39:20.256932 1518222 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1005 21:39:20.256938 1518222 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1005 21:39:20.257204 1518222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:39:20.257237 1518222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:39:20.257249 1518222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:39:20.257257 1518222 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:39:20.257269 1518222 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:39:20.257329 1518222 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:39:20.257432 1518222 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:39:20.257440 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /etc/ssl/certs/14537862.pem
	I1005 21:39:20.257537 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:39:20.267945 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:39:20.297058 1518222 start.go:303] post-start completed in 168.757346ms
	I1005 21:39:20.297525 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558-m02
	I1005 21:39:20.315315 1518222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/config.json ...
	I1005 21:39:20.315595 1518222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:39:20.315638 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:20.334428 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:39:20.431539 1518222 command_runner.go:130] > 17%!
	(MISSING)I1005 21:39:20.431615 1518222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:39:20.437762 1518222 command_runner.go:130] > 162G
	I1005 21:39:20.437789 1518222 start.go:128] duration metric: createHost completed in 8.249184386s
	I1005 21:39:20.437799 1518222 start.go:83] releasing machines lock for "multinode-814558-m02", held for 8.249397054s
	I1005 21:39:20.437881 1518222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558-m02
	I1005 21:39:20.459299 1518222 out.go:177] * Found network options:
	I1005 21:39:20.461083 1518222 out.go:177]   - NO_PROXY=192.168.58.2
	W1005 21:39:20.462580 1518222 proxy.go:119] fail to check proxy env: Error ip not in block
	W1005 21:39:20.462619 1518222 proxy.go:119] fail to check proxy env: Error ip not in block
	I1005 21:39:20.462689 1518222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:39:20.462742 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:20.463008 1518222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:39:20.463065 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:39:20.487214 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:39:20.498433 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:39:20.752466 1518222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:39:20.752570 1518222 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1005 21:39:20.758345 1518222 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1005 21:39:20.758371 1518222 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1005 21:39:20.758402 1518222 command_runner.go:130] > Device: b3h/179d	Inode: 5449409     Links: 1
	I1005 21:39:20.758412 1518222 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:39:20.758420 1518222 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1005 21:39:20.758433 1518222 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1005 21:39:20.758439 1518222 command_runner.go:130] > Change: 2023-10-05 21:15:15.895759670 +0000
	I1005 21:39:20.758450 1518222 command_runner.go:130] >  Birth: 2023-10-05 21:15:15.895759670 +0000
	I1005 21:39:20.758748 1518222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:39:20.782467 1518222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:39:20.782609 1518222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:39:20.824479 1518222 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1005 21:39:20.824517 1518222 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1005 21:39:20.824526 1518222 start.go:469] detecting cgroup driver to use...
	I1005 21:39:20.824556 1518222 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:39:20.824612 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:39:20.844007 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:39:20.857733 1518222 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:39:20.857798 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:39:20.873956 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:39:20.891663 1518222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:39:20.989105 1518222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:39:21.096417 1518222 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1005 21:39:21.096467 1518222 docker.go:213] disabling docker service ...
	I1005 21:39:21.096531 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:39:21.124389 1518222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:39:21.141930 1518222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:39:21.261594 1518222 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1005 21:39:21.261920 1518222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:39:21.276543 1518222 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1005 21:39:21.360618 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:39:21.375682 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:39:21.397894 1518222 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1005 21:39:21.400066 1518222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 21:39:21.400181 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:39:21.412967 1518222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 21:39:21.413052 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:39:21.426115 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:39:21.439471 1518222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
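
Taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. The section placement follows CRI-O's standard config layout and is not itself visible in the log; only the key/value changes are:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
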
	I1005 21:39:21.453536 1518222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:39:21.465228 1518222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:39:21.474810 1518222 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1005 21:39:21.476359 1518222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:39:21.487367 1518222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:39:21.581946 1518222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 21:39:21.697618 1518222 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 21:39:21.697760 1518222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 21:39:21.702554 1518222 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1005 21:39:21.702625 1518222 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1005 21:39:21.702649 1518222 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1005 21:39:21.702689 1518222 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:39:21.702712 1518222 command_runner.go:130] > Access: 2023-10-05 21:39:21.683884074 +0000
	I1005 21:39:21.702735 1518222 command_runner.go:130] > Modify: 2023-10-05 21:39:21.683884074 +0000
	I1005 21:39:21.702771 1518222 command_runner.go:130] > Change: 2023-10-05 21:39:21.683884074 +0000
	I1005 21:39:21.702794 1518222 command_runner.go:130] >  Birth: -
	I1005 21:39:21.702831 1518222 start.go:537] Will wait 60s for crictl version
	I1005 21:39:21.702908 1518222 ssh_runner.go:195] Run: which crictl
	I1005 21:39:21.707193 1518222 command_runner.go:130] > /usr/bin/crictl
	I1005 21:39:21.707465 1518222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:39:21.750394 1518222 command_runner.go:130] > Version:  0.1.0
	I1005 21:39:21.750417 1518222 command_runner.go:130] > RuntimeName:  cri-o
	I1005 21:39:21.750424 1518222 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1005 21:39:21.750431 1518222 command_runner.go:130] > RuntimeApiVersion:  v1
	I1005 21:39:21.753478 1518222 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 21:39:21.753562 1518222 ssh_runner.go:195] Run: crio --version
	I1005 21:39:21.798244 1518222 command_runner.go:130] > crio version 1.24.6
	I1005 21:39:21.798268 1518222 command_runner.go:130] > Version:          1.24.6
	I1005 21:39:21.798278 1518222 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 21:39:21.798284 1518222 command_runner.go:130] > GitTreeState:     clean
	I1005 21:39:21.798293 1518222 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 21:39:21.798299 1518222 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 21:39:21.798319 1518222 command_runner.go:130] > Compiler:         gc
	I1005 21:39:21.798325 1518222 command_runner.go:130] > Platform:         linux/arm64
	I1005 21:39:21.798331 1518222 command_runner.go:130] > Linkmode:         dynamic
	I1005 21:39:21.798341 1518222 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 21:39:21.798346 1518222 command_runner.go:130] > SeccompEnabled:   true
	I1005 21:39:21.798352 1518222 command_runner.go:130] > AppArmorEnabled:  false
	I1005 21:39:21.800446 1518222 ssh_runner.go:195] Run: crio --version
	I1005 21:39:21.846839 1518222 command_runner.go:130] > crio version 1.24.6
	I1005 21:39:21.846858 1518222 command_runner.go:130] > Version:          1.24.6
	I1005 21:39:21.846870 1518222 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1005 21:39:21.846876 1518222 command_runner.go:130] > GitTreeState:     clean
	I1005 21:39:21.846883 1518222 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1005 21:39:21.846889 1518222 command_runner.go:130] > GoVersion:        go1.18.2
	I1005 21:39:21.846894 1518222 command_runner.go:130] > Compiler:         gc
	I1005 21:39:21.846900 1518222 command_runner.go:130] > Platform:         linux/arm64
	I1005 21:39:21.846911 1518222 command_runner.go:130] > Linkmode:         dynamic
	I1005 21:39:21.846922 1518222 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1005 21:39:21.846927 1518222 command_runner.go:130] > SeccompEnabled:   true
	I1005 21:39:21.846935 1518222 command_runner.go:130] > AppArmorEnabled:  false
	I1005 21:39:21.851929 1518222 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 21:39:21.853541 1518222 out.go:177]   - env NO_PROXY=192.168.58.2
	I1005 21:39:21.855118 1518222 cli_runner.go:164] Run: docker network inspect multinode-814558 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:39:21.876231 1518222 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1005 21:39:21.881457 1518222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1005 21:39:21.895333 1518222 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558 for IP: 192.168.58.3
	I1005 21:39:21.895365 1518222 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac5d4c0ae883432caac512ac8160283213d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:39:21.895507 1518222 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key
	I1005 21:39:21.895549 1518222 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key
	I1005 21:39:21.895562 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1005 21:39:21.895578 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1005 21:39:21.895589 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1005 21:39:21.895600 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1005 21:39:21.895650 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem (1338 bytes)
	W1005 21:39:21.895678 1518222 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786_empty.pem, impossibly tiny 0 bytes
	I1005 21:39:21.895691 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:39:21.895722 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:39:21.895745 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:39:21.895770 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem (1675 bytes)
	I1005 21:39:21.895814 1518222 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:39:21.895841 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem -> /usr/share/ca-certificates/1453786.pem
	I1005 21:39:21.895853 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /usr/share/ca-certificates/14537862.pem
	I1005 21:39:21.895864 1518222 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:39:21.896214 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:39:21.927181 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 21:39:21.956546 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:39:21.986135 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:39:22.017742 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem --> /usr/share/ca-certificates/1453786.pem (1338 bytes)
	I1005 21:39:22.049033 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /usr/share/ca-certificates/14537862.pem (1708 bytes)
	I1005 21:39:22.079939 1518222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
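
The block above copies the shared CA material onto the node; each scp line logs the destination path and byte count. A quick way to confirm a transferred cert is the one you intended is to compare SHA-256 fingerprints on both ends. A minimal sketch, assuming the profile paths above and a hypothetical SSH alias node-m02:

	# Fingerprint the source CA and the installed copy; the two outputs must match.
	openssl x509 -noout -fingerprint -sha256 -in ~/.minikube/ca.crt
	ssh node-m02 'openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/ca.crt'
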
	I1005 21:39:22.111354 1518222 ssh_runner.go:195] Run: openssl version
	I1005 21:39:22.118761 1518222 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1005 21:39:22.119144 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1453786.pem && ln -fs /usr/share/ca-certificates/1453786.pem /etc/ssl/certs/1453786.pem"
	I1005 21:39:22.131392 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1453786.pem
	I1005 21:39:22.136494 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:39:22.136526 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:39:22.136579 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1453786.pem
	I1005 21:39:22.145229 1518222 command_runner.go:130] > 51391683
	I1005 21:39:22.145359 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1453786.pem /etc/ssl/certs/51391683.0"
	I1005 21:39:22.162738 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14537862.pem && ln -fs /usr/share/ca-certificates/14537862.pem /etc/ssl/certs/14537862.pem"
	I1005 21:39:22.174645 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14537862.pem
	I1005 21:39:22.179202 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:39:22.179826 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:39:22.179919 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14537862.pem
	I1005 21:39:22.188465 1518222 command_runner.go:130] > 3ec20f2e
	I1005 21:39:22.189365 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14537862.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 21:39:22.201288 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:39:22.213262 1518222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:39:22.218006 1518222 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:39:22.218036 1518222 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:39:22.218092 1518222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:39:22.226414 1518222 command_runner.go:130] > b5213941
	I1005 21:39:22.226843 1518222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
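
The pattern repeated three times above (test for the PEM, `openssl x509 -hash`, then symlink) is how the system trust store is populated: OpenSSL looks certificates up in /etc/ssl/certs by a subject-hash-named symlink, not by file name. A minimal sketch of one such idempotent install, with a hypothetical cert path:

	# Compute the OpenSSL subject hash and create the hash-named symlink
	# the TLS stack uses at verification time (".0" = first cert with this hash).
	cert=/usr/share/ca-certificates/example.pem   # hypothetical path
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
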
	I1005 21:39:22.238618 1518222 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:39:22.243202 1518222 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1005 21:39:22.243290 1518222 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
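
Note how first start is detected: the `ls` probe on /var/lib/minikube/certs/etcd exits with status 2, and that failure, not any state file, is what flags the node as fresh. The equivalent check, sketched with a hypothetical SSH alias node-m02:

	# A missing etcd certs directory (ls exit status 2) marks a first start.
	if ! ssh node-m02 'ls /var/lib/minikube/certs/etcd' >/dev/null 2>&1; then
	  echo "certs directory doesn't exist, likely first start"
	fi
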
	I1005 21:39:22.243402 1518222 ssh_runner.go:195] Run: crio config
	I1005 21:39:22.292972 1518222 command_runner.go:130] ! time="2023-10-05 21:39:22.292634313Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1005 21:39:22.293213 1518222 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1005 21:39:22.313854 1518222 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1005 21:39:22.313881 1518222 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1005 21:39:22.313890 1518222 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1005 21:39:22.313894 1518222 command_runner.go:130] > #
	I1005 21:39:22.313902 1518222 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1005 21:39:22.313918 1518222 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1005 21:39:22.313928 1518222 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1005 21:39:22.313937 1518222 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1005 21:39:22.313946 1518222 command_runner.go:130] > # reload'.
	I1005 21:39:22.313954 1518222 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1005 21:39:22.313963 1518222 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1005 21:39:22.313975 1518222 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1005 21:39:22.313983 1518222 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1005 21:39:22.313996 1518222 command_runner.go:130] > [crio]
	I1005 21:39:22.314004 1518222 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1005 21:39:22.314014 1518222 command_runner.go:130] > # containers images, in this directory.
	I1005 21:39:22.314022 1518222 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1005 21:39:22.314033 1518222 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1005 21:39:22.314043 1518222 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1005 21:39:22.314051 1518222 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1005 21:39:22.314058 1518222 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1005 21:39:22.314076 1518222 command_runner.go:130] > # storage_driver = "vfs"
	I1005 21:39:22.314083 1518222 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1005 21:39:22.314093 1518222 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1005 21:39:22.314103 1518222 command_runner.go:130] > # storage_option = [
	I1005 21:39:22.314107 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.314115 1518222 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1005 21:39:22.314128 1518222 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1005 21:39:22.314133 1518222 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1005 21:39:22.314141 1518222 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1005 21:39:22.314151 1518222 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1005 21:39:22.314164 1518222 command_runner.go:130] > # always happen on a node reboot
	I1005 21:39:22.314174 1518222 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1005 21:39:22.314185 1518222 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1005 21:39:22.314192 1518222 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1005 21:39:22.314208 1518222 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1005 21:39:22.314220 1518222 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1005 21:39:22.314229 1518222 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1005 21:39:22.314242 1518222 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1005 21:39:22.314250 1518222 command_runner.go:130] > # internal_wipe = true
	I1005 21:39:22.314256 1518222 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1005 21:39:22.314265 1518222 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1005 21:39:22.314274 1518222 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1005 21:39:22.314283 1518222 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1005 21:39:22.314294 1518222 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1005 21:39:22.314300 1518222 command_runner.go:130] > [crio.api]
	I1005 21:39:22.314307 1518222 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1005 21:39:22.314317 1518222 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1005 21:39:22.314323 1518222 command_runner.go:130] > # IP address on which the stream server will listen.
	I1005 21:39:22.314329 1518222 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1005 21:39:22.314344 1518222 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1005 21:39:22.314352 1518222 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1005 21:39:22.314357 1518222 command_runner.go:130] > # stream_port = "0"
	I1005 21:39:22.314364 1518222 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1005 21:39:22.314369 1518222 command_runner.go:130] > # stream_enable_tls = false
	I1005 21:39:22.314377 1518222 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1005 21:39:22.314386 1518222 command_runner.go:130] > # stream_idle_timeout = ""
	I1005 21:39:22.314394 1518222 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1005 21:39:22.314402 1518222 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1005 21:39:22.314413 1518222 command_runner.go:130] > # minutes.
	I1005 21:39:22.314419 1518222 command_runner.go:130] > # stream_tls_cert = ""
	I1005 21:39:22.314432 1518222 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1005 21:39:22.314439 1518222 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1005 21:39:22.314448 1518222 command_runner.go:130] > # stream_tls_key = ""
	I1005 21:39:22.314455 1518222 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1005 21:39:22.314463 1518222 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1005 21:39:22.314469 1518222 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1005 21:39:22.314478 1518222 command_runner.go:130] > # stream_tls_ca = ""
	I1005 21:39:22.314488 1518222 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 21:39:22.314497 1518222 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1005 21:39:22.314506 1518222 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1005 21:39:22.314520 1518222 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1005 21:39:22.314551 1518222 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1005 21:39:22.314562 1518222 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1005 21:39:22.314567 1518222 command_runner.go:130] > [crio.runtime]
	I1005 21:39:22.314576 1518222 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1005 21:39:22.314588 1518222 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1005 21:39:22.314595 1518222 command_runner.go:130] > # "nofile=1024:2048"
	I1005 21:39:22.314607 1518222 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1005 21:39:22.314612 1518222 command_runner.go:130] > # default_ulimits = [
	I1005 21:39:22.314616 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.314624 1518222 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1005 21:39:22.314631 1518222 command_runner.go:130] > # no_pivot = false
	I1005 21:39:22.314638 1518222 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1005 21:39:22.314651 1518222 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1005 21:39:22.314661 1518222 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1005 21:39:22.314670 1518222 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1005 21:39:22.314678 1518222 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1005 21:39:22.314690 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 21:39:22.314696 1518222 command_runner.go:130] > # conmon = ""
	I1005 21:39:22.314701 1518222 command_runner.go:130] > # Cgroup setting for conmon
	I1005 21:39:22.314712 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1005 21:39:22.314720 1518222 command_runner.go:130] > conmon_cgroup = "pod"
	I1005 21:39:22.314728 1518222 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1005 21:39:22.314734 1518222 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1005 21:39:22.314745 1518222 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1005 21:39:22.314753 1518222 command_runner.go:130] > # conmon_env = [
	I1005 21:39:22.314758 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.314769 1518222 command_runner.go:130] > # Additional environment variables to set for all the
	I1005 21:39:22.314781 1518222 command_runner.go:130] > # containers. These are overridden if set in the
	I1005 21:39:22.314789 1518222 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1005 21:39:22.314794 1518222 command_runner.go:130] > # default_env = [
	I1005 21:39:22.314800 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.314807 1518222 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1005 21:39:22.314814 1518222 command_runner.go:130] > # selinux = false
	I1005 21:39:22.314822 1518222 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1005 21:39:22.314830 1518222 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1005 21:39:22.314837 1518222 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1005 21:39:22.314846 1518222 command_runner.go:130] > # seccomp_profile = ""
	I1005 21:39:22.314853 1518222 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1005 21:39:22.314861 1518222 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1005 21:39:22.314872 1518222 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1005 21:39:22.314878 1518222 command_runner.go:130] > # which might increase security.
	I1005 21:39:22.314894 1518222 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1005 21:39:22.314902 1518222 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1005 21:39:22.314914 1518222 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1005 21:39:22.314921 1518222 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1005 21:39:22.314929 1518222 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1005 21:39:22.314938 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:39:22.314947 1518222 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1005 21:39:22.314955 1518222 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1005 21:39:22.314964 1518222 command_runner.go:130] > # the cgroup blockio controller.
	I1005 21:39:22.314970 1518222 command_runner.go:130] > # blockio_config_file = ""
	I1005 21:39:22.314978 1518222 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1005 21:39:22.314986 1518222 command_runner.go:130] > # irqbalance daemon.
	I1005 21:39:22.314993 1518222 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1005 21:39:22.315009 1518222 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1005 21:39:22.315017 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:39:22.315025 1518222 command_runner.go:130] > # rdt_config_file = ""
	I1005 21:39:22.315032 1518222 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1005 21:39:22.315040 1518222 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1005 21:39:22.315051 1518222 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1005 21:39:22.315060 1518222 command_runner.go:130] > # separate_pull_cgroup = ""
	I1005 21:39:22.315068 1518222 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1005 21:39:22.315079 1518222 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1005 21:39:22.315084 1518222 command_runner.go:130] > # will be added.
	I1005 21:39:22.315089 1518222 command_runner.go:130] > # default_capabilities = [
	I1005 21:39:22.315099 1518222 command_runner.go:130] > # 	"CHOWN",
	I1005 21:39:22.315105 1518222 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1005 21:39:22.315109 1518222 command_runner.go:130] > # 	"FSETID",
	I1005 21:39:22.315116 1518222 command_runner.go:130] > # 	"FOWNER",
	I1005 21:39:22.315125 1518222 command_runner.go:130] > # 	"SETGID",
	I1005 21:39:22.315131 1518222 command_runner.go:130] > # 	"SETUID",
	I1005 21:39:22.315136 1518222 command_runner.go:130] > # 	"SETPCAP",
	I1005 21:39:22.315141 1518222 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1005 21:39:22.315148 1518222 command_runner.go:130] > # 	"KILL",
	I1005 21:39:22.315152 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315165 1518222 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1005 21:39:22.315173 1518222 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1005 21:39:22.315184 1518222 command_runner.go:130] > # add_inheritable_capabilities = true
	I1005 21:39:22.315192 1518222 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1005 21:39:22.315200 1518222 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 21:39:22.315208 1518222 command_runner.go:130] > # default_sysctls = [
	I1005 21:39:22.315213 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315220 1518222 command_runner.go:130] > # List of devices on the host that a
	I1005 21:39:22.315231 1518222 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1005 21:39:22.315237 1518222 command_runner.go:130] > # allowed_devices = [
	I1005 21:39:22.315249 1518222 command_runner.go:130] > # 	"/dev/fuse",
	I1005 21:39:22.315253 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315259 1518222 command_runner.go:130] > # List of additional devices, specified as
	I1005 21:39:22.315302 1518222 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1005 21:39:22.315313 1518222 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1005 21:39:22.315321 1518222 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1005 21:39:22.315326 1518222 command_runner.go:130] > # additional_devices = [
	I1005 21:39:22.315332 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315339 1518222 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1005 21:39:22.315347 1518222 command_runner.go:130] > # cdi_spec_dirs = [
	I1005 21:39:22.315355 1518222 command_runner.go:130] > # 	"/etc/cdi",
	I1005 21:39:22.315360 1518222 command_runner.go:130] > # 	"/var/run/cdi",
	I1005 21:39:22.315370 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315377 1518222 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1005 21:39:22.315385 1518222 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1005 21:39:22.315392 1518222 command_runner.go:130] > # Defaults to false.
	I1005 21:39:22.315399 1518222 command_runner.go:130] > # device_ownership_from_security_context = false
	I1005 21:39:22.315410 1518222 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1005 21:39:22.315418 1518222 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1005 21:39:22.315426 1518222 command_runner.go:130] > # hooks_dir = [
	I1005 21:39:22.315432 1518222 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1005 21:39:22.315436 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.315449 1518222 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1005 21:39:22.315458 1518222 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1005 21:39:22.315468 1518222 command_runner.go:130] > # its default mounts from the following two files:
	I1005 21:39:22.315472 1518222 command_runner.go:130] > #
	I1005 21:39:22.315479 1518222 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1005 21:39:22.315487 1518222 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1005 21:39:22.315500 1518222 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1005 21:39:22.315505 1518222 command_runner.go:130] > #
	I1005 21:39:22.315513 1518222 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1005 21:39:22.315524 1518222 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1005 21:39:22.315532 1518222 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1005 21:39:22.315542 1518222 command_runner.go:130] > #      only add mounts it finds in this file.
	I1005 21:39:22.315546 1518222 command_runner.go:130] > #
	I1005 21:39:22.315551 1518222 command_runner.go:130] > # default_mounts_file = ""
	I1005 21:39:22.315558 1518222 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1005 21:39:22.315566 1518222 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1005 21:39:22.315575 1518222 command_runner.go:130] > # pids_limit = 0
	I1005 21:39:22.315583 1518222 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1005 21:39:22.315595 1518222 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1005 21:39:22.315603 1518222 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1005 21:39:22.315618 1518222 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1005 21:39:22.315624 1518222 command_runner.go:130] > # log_size_max = -1
	I1005 21:39:22.315636 1518222 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1005 21:39:22.315641 1518222 command_runner.go:130] > # log_to_journald = false
	I1005 21:39:22.315652 1518222 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1005 21:39:22.315659 1518222 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1005 21:39:22.315669 1518222 command_runner.go:130] > # Path to directory for container attach sockets.
	I1005 21:39:22.315676 1518222 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1005 21:39:22.315686 1518222 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1005 21:39:22.315691 1518222 command_runner.go:130] > # bind_mount_prefix = ""
	I1005 21:39:22.315698 1518222 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1005 21:39:22.315707 1518222 command_runner.go:130] > # read_only = false
	I1005 21:39:22.315715 1518222 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1005 21:39:22.315727 1518222 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1005 21:39:22.315733 1518222 command_runner.go:130] > # live configuration reload.
	I1005 21:39:22.315738 1518222 command_runner.go:130] > # log_level = "info"
	I1005 21:39:22.315745 1518222 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1005 21:39:22.315751 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:39:22.315759 1518222 command_runner.go:130] > # log_filter = ""
	I1005 21:39:22.315767 1518222 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1005 21:39:22.315779 1518222 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1005 21:39:22.315784 1518222 command_runner.go:130] > # separated by comma.
	I1005 21:39:22.315797 1518222 command_runner.go:130] > # uid_mappings = ""
	I1005 21:39:22.315804 1518222 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1005 21:39:22.315815 1518222 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1005 21:39:22.315821 1518222 command_runner.go:130] > # separated by comma.
	I1005 21:39:22.315825 1518222 command_runner.go:130] > # gid_mappings = ""
	I1005 21:39:22.315836 1518222 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1005 21:39:22.315847 1518222 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 21:39:22.315854 1518222 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 21:39:22.315864 1518222 command_runner.go:130] > # minimum_mappable_uid = -1
	I1005 21:39:22.315871 1518222 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1005 21:39:22.315879 1518222 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1005 21:39:22.315890 1518222 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1005 21:39:22.315896 1518222 command_runner.go:130] > # minimum_mappable_gid = -1
	I1005 21:39:22.315903 1518222 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1005 21:39:22.315911 1518222 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1005 21:39:22.315918 1518222 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1005 21:39:22.315929 1518222 command_runner.go:130] > # ctr_stop_timeout = 30
	I1005 21:39:22.315937 1518222 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1005 21:39:22.315955 1518222 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1005 21:39:22.315965 1518222 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1005 21:39:22.315971 1518222 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1005 21:39:22.315976 1518222 command_runner.go:130] > # drop_infra_ctr = true
	I1005 21:39:22.315991 1518222 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1005 21:39:22.315998 1518222 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1005 21:39:22.316007 1518222 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1005 21:39:22.316016 1518222 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1005 21:39:22.316024 1518222 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1005 21:39:22.316034 1518222 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1005 21:39:22.316040 1518222 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1005 21:39:22.316053 1518222 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1005 21:39:22.316058 1518222 command_runner.go:130] > # pinns_path = ""
	I1005 21:39:22.316070 1518222 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1005 21:39:22.316078 1518222 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1005 21:39:22.316085 1518222 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1005 21:39:22.316095 1518222 command_runner.go:130] > # default_runtime = "runc"
	I1005 21:39:22.316102 1518222 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1005 21:39:22.316112 1518222 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1005 21:39:22.316127 1518222 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1005 21:39:22.316133 1518222 command_runner.go:130] > # creation as a file is not desired either.
	I1005 21:39:22.316143 1518222 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1005 21:39:22.316149 1518222 command_runner.go:130] > # the hostname is being managed dynamically.
	I1005 21:39:22.316155 1518222 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1005 21:39:22.316159 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.316167 1518222 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1005 21:39:22.316174 1518222 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1005 21:39:22.316182 1518222 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1005 21:39:22.316192 1518222 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1005 21:39:22.316200 1518222 command_runner.go:130] > #
	I1005 21:39:22.316206 1518222 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1005 21:39:22.316212 1518222 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1005 21:39:22.316221 1518222 command_runner.go:130] > #  runtime_type = "oci"
	I1005 21:39:22.316227 1518222 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1005 21:39:22.316238 1518222 command_runner.go:130] > #  privileged_without_host_devices = false
	I1005 21:39:22.316244 1518222 command_runner.go:130] > #  allowed_annotations = []
	I1005 21:39:22.316250 1518222 command_runner.go:130] > # Where:
	I1005 21:39:22.316257 1518222 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1005 21:39:22.316265 1518222 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1005 21:39:22.316273 1518222 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1005 21:39:22.316286 1518222 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1005 21:39:22.316291 1518222 command_runner.go:130] > #   in $PATH.
	I1005 21:39:22.316299 1518222 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1005 21:39:22.316308 1518222 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1005 21:39:22.316328 1518222 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1005 21:39:22.316337 1518222 command_runner.go:130] > #   state.
	I1005 21:39:22.316345 1518222 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1005 21:39:22.316352 1518222 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1005 21:39:22.316360 1518222 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1005 21:39:22.316366 1518222 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1005 21:39:22.316376 1518222 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1005 21:39:22.316385 1518222 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1005 21:39:22.316394 1518222 command_runner.go:130] > #   The currently recognized values are:
	I1005 21:39:22.316403 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1005 21:39:22.316413 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1005 21:39:22.316424 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1005 21:39:22.316432 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1005 21:39:22.316441 1518222 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1005 21:39:22.316451 1518222 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1005 21:39:22.316459 1518222 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1005 21:39:22.316470 1518222 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1005 21:39:22.316477 1518222 command_runner.go:130] > #   should be moved to the container's cgroup
	I1005 21:39:22.316486 1518222 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1005 21:39:22.316492 1518222 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1005 21:39:22.316502 1518222 command_runner.go:130] > runtime_type = "oci"
	I1005 21:39:22.316508 1518222 command_runner.go:130] > runtime_root = "/run/runc"
	I1005 21:39:22.316513 1518222 command_runner.go:130] > runtime_config_path = ""
	I1005 21:39:22.316518 1518222 command_runner.go:130] > monitor_path = ""
	I1005 21:39:22.316523 1518222 command_runner.go:130] > monitor_cgroup = ""
	I1005 21:39:22.316528 1518222 command_runner.go:130] > monitor_exec_cgroup = ""
	I1005 21:39:22.316578 1518222 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1005 21:39:22.316588 1518222 command_runner.go:130] > # running containers
	I1005 21:39:22.316597 1518222 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1005 21:39:22.316605 1518222 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1005 21:39:22.316613 1518222 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1005 21:39:22.316625 1518222 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1005 21:39:22.316632 1518222 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1005 21:39:22.316642 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1005 21:39:22.316648 1518222 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1005 21:39:22.316653 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1005 21:39:22.316663 1518222 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1005 21:39:22.316669 1518222 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1005 21:39:22.316676 1518222 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1005 21:39:22.316774 1518222 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1005 21:39:22.316784 1518222 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1005 21:39:22.316793 1518222 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1005 21:39:22.316809 1518222 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1005 21:39:22.316823 1518222 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1005 21:39:22.316846 1518222 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1005 21:39:22.316857 1518222 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1005 21:39:22.316866 1518222 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1005 21:39:22.316875 1518222 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1005 21:39:22.316884 1518222 command_runner.go:130] > # Example:
	I1005 21:39:22.316890 1518222 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1005 21:39:22.316896 1518222 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1005 21:39:22.316906 1518222 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1005 21:39:22.316913 1518222 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1005 21:39:22.316922 1518222 command_runner.go:130] > # cpuset = "0-1"
	I1005 21:39:22.316928 1518222 command_runner.go:130] > # cpushares = 0
	I1005 21:39:22.316932 1518222 command_runner.go:130] > # Where:
	I1005 21:39:22.316945 1518222 command_runner.go:130] > # The workload name is workload-type.
	I1005 21:39:22.316954 1518222 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1005 21:39:22.316961 1518222 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1005 21:39:22.316968 1518222 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1005 21:39:22.316983 1518222 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1005 21:39:22.316990 1518222 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1005 21:39:22.316998 1518222 command_runner.go:130] > # 
	I1005 21:39:22.317006 1518222 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1005 21:39:22.317018 1518222 command_runner.go:130] > #
	I1005 21:39:22.317026 1518222 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1005 21:39:22.317037 1518222 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1005 21:39:22.317045 1518222 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1005 21:39:22.317053 1518222 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1005 21:39:22.317060 1518222 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1005 21:39:22.317070 1518222 command_runner.go:130] > [crio.image]
	I1005 21:39:22.317077 1518222 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1005 21:39:22.317083 1518222 command_runner.go:130] > # default_transport = "docker://"
	I1005 21:39:22.317094 1518222 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1005 21:39:22.317102 1518222 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1005 21:39:22.317107 1518222 command_runner.go:130] > # global_auth_file = ""
	I1005 21:39:22.317113 1518222 command_runner.go:130] > # The image used to instantiate infra containers.
	I1005 21:39:22.317120 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:39:22.317125 1518222 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1005 21:39:22.317133 1518222 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1005 21:39:22.317141 1518222 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1005 21:39:22.317149 1518222 command_runner.go:130] > # This option supports live configuration reload.
	I1005 21:39:22.317158 1518222 command_runner.go:130] > # pause_image_auth_file = ""
	I1005 21:39:22.317169 1518222 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1005 21:39:22.317176 1518222 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1005 21:39:22.317187 1518222 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1005 21:39:22.317194 1518222 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1005 21:39:22.317204 1518222 command_runner.go:130] > # pause_command = "/pause"
	I1005 21:39:22.317211 1518222 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1005 21:39:22.317219 1518222 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1005 21:39:22.317228 1518222 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1005 21:39:22.317238 1518222 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1005 21:39:22.317245 1518222 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1005 21:39:22.317253 1518222 command_runner.go:130] > # signature_policy = ""
	I1005 21:39:22.317268 1518222 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1005 21:39:22.317280 1518222 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1005 21:39:22.317285 1518222 command_runner.go:130] > # changing them here.
	I1005 21:39:22.317294 1518222 command_runner.go:130] > # insecure_registries = [
	I1005 21:39:22.317298 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.317306 1518222 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1005 21:39:22.317314 1518222 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1005 21:39:22.317320 1518222 command_runner.go:130] > # image_volumes = "mkdir"
	I1005 21:39:22.317328 1518222 command_runner.go:130] > # Temporary directory to use for storing big files
	I1005 21:39:22.317352 1518222 command_runner.go:130] > # big_files_temporary_dir = ""
	I1005 21:39:22.317361 1518222 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1005 21:39:22.317370 1518222 command_runner.go:130] > # CNI plugins.
	I1005 21:39:22.317375 1518222 command_runner.go:130] > [crio.network]
	I1005 21:39:22.317383 1518222 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1005 21:39:22.317392 1518222 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1005 21:39:22.317398 1518222 command_runner.go:130] > # cni_default_network = ""
	I1005 21:39:22.317405 1518222 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1005 21:39:22.317413 1518222 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1005 21:39:22.317420 1518222 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1005 21:39:22.317431 1518222 command_runner.go:130] > # plugin_dirs = [
	I1005 21:39:22.317436 1518222 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1005 21:39:22.317440 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.317453 1518222 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1005 21:39:22.317457 1518222 command_runner.go:130] > [crio.metrics]
	I1005 21:39:22.317470 1518222 command_runner.go:130] > # Globally enable or disable metrics support.
	I1005 21:39:22.317475 1518222 command_runner.go:130] > # enable_metrics = false
	I1005 21:39:22.317481 1518222 command_runner.go:130] > # Specify enabled metrics collectors.
	I1005 21:39:22.317487 1518222 command_runner.go:130] > # Per default all metrics are enabled.
	I1005 21:39:22.317494 1518222 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1005 21:39:22.317508 1518222 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1005 21:39:22.317515 1518222 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1005 21:39:22.317523 1518222 command_runner.go:130] > # metrics_collectors = [
	I1005 21:39:22.317528 1518222 command_runner.go:130] > # 	"operations",
	I1005 21:39:22.317534 1518222 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1005 21:39:22.317543 1518222 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1005 21:39:22.317548 1518222 command_runner.go:130] > # 	"operations_errors",
	I1005 21:39:22.317553 1518222 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1005 21:39:22.317563 1518222 command_runner.go:130] > # 	"image_pulls_by_name",
	I1005 21:39:22.317569 1518222 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1005 21:39:22.317574 1518222 command_runner.go:130] > # 	"image_pulls_failures",
	I1005 21:39:22.317579 1518222 command_runner.go:130] > # 	"image_pulls_successes",
	I1005 21:39:22.317584 1518222 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1005 21:39:22.317594 1518222 command_runner.go:130] > # 	"image_layer_reuse",
	I1005 21:39:22.317600 1518222 command_runner.go:130] > # 	"containers_oom_total",
	I1005 21:39:22.317609 1518222 command_runner.go:130] > # 	"containers_oom",
	I1005 21:39:22.317615 1518222 command_runner.go:130] > # 	"processes_defunct",
	I1005 21:39:22.317620 1518222 command_runner.go:130] > # 	"operations_total",
	I1005 21:39:22.317630 1518222 command_runner.go:130] > # 	"operations_latency_seconds",
	I1005 21:39:22.317635 1518222 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1005 21:39:22.317641 1518222 command_runner.go:130] > # 	"operations_errors_total",
	I1005 21:39:22.317652 1518222 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1005 21:39:22.317660 1518222 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1005 21:39:22.317666 1518222 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1005 21:39:22.317671 1518222 command_runner.go:130] > # 	"image_pulls_success_total",
	I1005 21:39:22.317676 1518222 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1005 21:39:22.317683 1518222 command_runner.go:130] > # 	"containers_oom_count_total",
	I1005 21:39:22.317688 1518222 command_runner.go:130] > # ]
	I1005 21:39:22.317700 1518222 command_runner.go:130] > # The port on which the metrics server will listen.
	I1005 21:39:22.317706 1518222 command_runner.go:130] > # metrics_port = 9090
	I1005 21:39:22.317718 1518222 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1005 21:39:22.317725 1518222 command_runner.go:130] > # metrics_socket = ""
	I1005 21:39:22.317735 1518222 command_runner.go:130] > # The certificate for the secure metrics server.
	I1005 21:39:22.317742 1518222 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1005 21:39:22.317750 1518222 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1005 21:39:22.317756 1518222 command_runner.go:130] > # certificate on any modification event.
	I1005 21:39:22.317763 1518222 command_runner.go:130] > # metrics_cert = ""
	I1005 21:39:22.317770 1518222 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1005 21:39:22.317779 1518222 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1005 21:39:22.317785 1518222 command_runner.go:130] > # metrics_key = ""
	I1005 21:39:22.317792 1518222 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1005 21:39:22.317801 1518222 command_runner.go:130] > [crio.tracing]
	I1005 21:39:22.317807 1518222 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1005 21:39:22.317819 1518222 command_runner.go:130] > # enable_tracing = false
	I1005 21:39:22.317825 1518222 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1005 21:39:22.317831 1518222 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1005 21:39:22.317837 1518222 command_runner.go:130] > # Number of samples to collect per million spans.
	I1005 21:39:22.317843 1518222 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1005 21:39:22.317854 1518222 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1005 21:39:22.317865 1518222 command_runner.go:130] > [crio.stats]
	I1005 21:39:22.317873 1518222 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1005 21:39:22.317883 1518222 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1005 21:39:22.317889 1518222 command_runner.go:130] > # stats_collection_period = 0
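
Everything commented out in the `crio config` dump above is a compiled-in default; only `conmon_cgroup`, `cgroup_manager`, `pause_image`, and the `runc` runtime table are set explicitly here. Rather than editing the dump, CRI-O is normally reconfigured with a small TOML drop-in (conventionally under /etc/crio/crio.conf.d/). A sketch, with the file name and values purely illustrative:

	# Override two defaults shown above; most keys need a daemon restart,
	# though options marked "supports live configuration reload" take SIGHUP.
	sudo tee /etc/crio/crio.conf.d/10-overrides.conf >/dev/null <<'EOF'
	[crio.runtime]
	log_level = "debug"
	EOF
	sudo systemctl restart crio
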
	I1005 21:39:22.317993 1518222 cni.go:84] Creating CNI manager for ""
	I1005 21:39:22.318004 1518222 cni.go:136] 2 nodes found, recommending kindnet
	I1005 21:39:22.318013 1518222 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:39:22.318032 1518222 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-814558 NodeName:multinode-814558-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:39:22.318159 1518222 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-814558-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
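The kubeadm config above is generated from the per-node values in the preceding options struct (advertise address, node name, kubelet node-ip). A minimal sketch of that kind of templating in Go; the template fragment and value struct below are illustrative assumptions, not minikube's actual generator:

// A sketch only: rendering a kubeadm InitConfiguration fragment from
// per-node values with text/template.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	// Per-node values matching the joining worker in the log above.
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		NodeIP           string
	}{
		AdvertiseAddress: "192.168.58.3",
		APIServerPort:    8443,
		NodeName:         "multinode-814558-m02",
		NodeIP:           "192.168.58.3",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Keeping node-specific values out of the template text is what lets the same fragment serve both the control plane and each joining worker.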
	I1005 21:39:22.318215 1518222 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-814558-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:39:22.318285 1518222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:39:22.328232 1518222 command_runner.go:130] > kubeadm
	I1005 21:39:22.328250 1518222 command_runner.go:130] > kubectl
	I1005 21:39:22.328255 1518222 command_runner.go:130] > kubelet
	I1005 21:39:22.329672 1518222 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:39:22.329739 1518222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1005 21:39:22.340292 1518222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1005 21:39:22.363728 1518222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:39:22.386819 1518222 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:39:22.391522 1518222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
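The one-liner above upserts the control-plane host entry: it filters out any stale control-plane.minikube.internal line, appends the current mapping, and swaps the file in via a temp copy. A minimal Go sketch of the same upsert, assuming local file access (in the log, minikube runs the bash version over SSH inside the node):

// Sketch of the /etc/hosts upsert performed above.
package main

import (
	"os"
	"strings"
)

// upsertHost removes any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo / cp one-liner in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log above; running this for real requires root.
	if err := upsertHost("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}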
	I1005 21:39:22.405254 1518222 host.go:66] Checking if "multinode-814558" exists ...
	I1005 21:39:22.405526 1518222 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:39:22.405550 1518222 start.go:304] JoinCluster: &{Name:multinode-814558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-814558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:39:22.405636 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1005 21:39:22.405685 1518222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:39:22.423952 1518222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:39:22.601810 1518222 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9q2nwl.8yiu35r7e2othql3 --discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d 
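The control plane is first asked for a ready-made join command: kubeadm token create --print-join-command --ttl=0 mints a non-expiring bootstrap token and prints the matching kubeadm join invocation, which the worker then executes below. A minimal local sketch of that step in Go (assumes kubeadm is on PATH; in the log minikube runs it over SSH on the control-plane node):

// A sketch of the token step logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --ttl=0 mints a non-expiring bootstrap token, as in the log above.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("join command: %s", out)
}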
	I1005 21:39:22.601855 1518222 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1005 21:39:22.601883 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9q2nwl.8yiu35r7e2othql3 --discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-814558-m02"
	I1005 21:39:22.650050 1518222 command_runner.go:130] > [preflight] Running pre-flight checks
	I1005 21:39:22.687169 1518222 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1005 21:39:22.687195 1518222 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1005 21:39:22.687202 1518222 command_runner.go:130] > OS: Linux
	I1005 21:39:22.687209 1518222 command_runner.go:130] > CGROUPS_CPU: enabled
	I1005 21:39:22.687217 1518222 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1005 21:39:22.687223 1518222 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1005 21:39:22.687229 1518222 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1005 21:39:22.687240 1518222 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1005 21:39:22.687246 1518222 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1005 21:39:22.687262 1518222 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1005 21:39:22.687281 1518222 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1005 21:39:22.687292 1518222 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1005 21:39:22.805355 1518222 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1005 21:39:22.805430 1518222 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1005 21:39:22.836835 1518222 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1005 21:39:22.837150 1518222 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1005 21:39:22.837187 1518222 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1005 21:39:22.935157 1518222 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1005 21:39:26.450567 1518222 command_runner.go:130] > This node has joined the cluster:
	I1005 21:39:26.450610 1518222 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1005 21:39:26.450618 1518222 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1005 21:39:26.450630 1518222 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1005 21:39:26.454284 1518222 command_runner.go:130] ! W1005 21:39:22.649579    1023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1005 21:39:26.454315 1518222 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1005 21:39:26.454331 1518222 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1005 21:39:26.454352 1518222 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9q2nwl.8yiu35r7e2othql3 --discovery-token-ca-cert-hash sha256:fc3fbe8f8e38b68917c98c9db2374d5c4f1029807147531a9bd59ccd386fb68d --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-814558-m02": (3.852454274s)
	I1005 21:39:26.454368 1518222 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1005 21:39:26.711345 1518222 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1005 21:39:26.711376 1518222 start.go:306] JoinCluster complete in 4.305827469s
	I1005 21:39:26.711388 1518222 cni.go:84] Creating CNI manager for ""
	I1005 21:39:26.711394 1518222 cni.go:136] 2 nodes found, recommending kindnet
	I1005 21:39:26.711446 1518222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:39:26.716925 1518222 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1005 21:39:26.716949 1518222 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1005 21:39:26.716957 1518222 command_runner.go:130] > Device: 3ah/58d	Inode: 5453116     Links: 1
	I1005 21:39:26.716965 1518222 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1005 21:39:26.716980 1518222 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1005 21:39:26.716986 1518222 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1005 21:39:26.716992 1518222 command_runner.go:130] > Change: 2023-10-05 21:15:16.567757178 +0000
	I1005 21:39:26.717003 1518222 command_runner.go:130] >  Birth: 2023-10-05 21:15:16.523757341 +0000
	I1005 21:39:26.717409 1518222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:39:26.717428 1518222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:39:26.745123 1518222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:39:27.158630 1518222 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1005 21:39:27.164950 1518222 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1005 21:39:27.168787 1518222 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1005 21:39:27.198707 1518222 command_runner.go:130] > daemonset.apps/kindnet configured
	I1005 21:39:27.199534 1518222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:39:27.199796 1518222 kapi.go:59] client config for multinode-814558: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:39:27.200119 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1005 21:39:27.200135 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:27.200145 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:27.200152 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:27.205194 1518222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1005 21:39:27.205224 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:27.205234 1518222 round_trippers.go:580]     Content-Length: 291
	I1005 21:39:27.205241 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:27 GMT
	I1005 21:39:27.205247 1518222 round_trippers.go:580]     Audit-Id: 1db4f568-44fa-47ce-8947-e5f3abde74af
	I1005 21:39:27.205253 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:27.205260 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:27.205266 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:27.205272 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:27.205999 1518222 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5545a75a-e1ab-458a-8428-11a477671681","resourceVersion":"413","creationTimestamp":"2023-10-05T21:38:24Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1005 21:39:27.206113 1518222 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-814558" context rescaled to 1 replicas
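The read above goes through the Deployment's Scale subresource (GET .../deployments/coredns/scale), and the rescale to one replica is a write through the same subresource. A minimal client-go sketch of that read-modify-write, assuming a kubeconfig at the default location (an illustration, not minikube's kapi code):

// A client-go sketch of the coredns rescale logged above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale, as in the round-tripper log above.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write back through the same subresource.
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns rescaled to 1 replica")
}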
	I1005 21:39:27.206147 1518222 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1005 21:39:27.208299 1518222 out.go:177] * Verifying Kubernetes components...
	I1005 21:39:27.210274 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:39:27.231812 1518222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:39:27.232086 1518222 kapi.go:59] client config for multinode-814558: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/multinode-814558/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:39:27.232542 1518222 node_ready.go:35] waiting up to 6m0s for node "multinode-814558-m02" to be "Ready" ...
	I1005 21:39:27.232632 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:27.232715 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:27.232736 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:27.232746 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:27.238591 1518222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1005 21:39:27.238618 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:27.238628 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:27.238642 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:27.238649 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:27 GMT
	I1005 21:39:27.238656 1518222 round_trippers.go:580]     Audit-Id: dff786b2-5982-46aa-ad96-b9cf98f31c47
	I1005 21:39:27.238666 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:27.238672 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:27.239286 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:27.239767 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:27.239783 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:27.239792 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:27.239805 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:27.242544 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:27.242569 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:27.242577 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:27.242584 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:27 GMT
	I1005 21:39:27.242590 1518222 round_trippers.go:580]     Audit-Id: 9a595fa5-975a-4d03-b8ff-2ae5c7bf2ea0
	I1005 21:39:27.242596 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:27.242602 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:27.242609 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:27.243146 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:27.744258 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:27.744284 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:27.744295 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:27.744302 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:27.746875 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:27.746949 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:27.746998 1518222 round_trippers.go:580]     Audit-Id: 58649101-d990-48c9-8431-01c60b303067
	I1005 21:39:27.747040 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:27.747061 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:27.747082 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:27.747117 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:27.747142 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:27 GMT
	I1005 21:39:27.747315 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:28.243733 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:28.243757 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:28.243768 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:28.243775 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:28.246403 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:28.246430 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:28.246439 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:28.246447 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:28.246453 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:28.246459 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:28 GMT
	I1005 21:39:28.246466 1518222 round_trippers.go:580]     Audit-Id: 7dd04a39-2812-4649-8c64-306a0da9f13e
	I1005 21:39:28.246472 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:28.246587 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:28.744647 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:28.744672 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:28.744681 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:28.744688 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:28.747383 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:28.747409 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:28.747434 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:28.747443 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:28 GMT
	I1005 21:39:28.747450 1518222 round_trippers.go:580]     Audit-Id: 7ec5cc00-68f2-4d1d-95c7-2401bd5e38d1
	I1005 21:39:28.747459 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:28.747465 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:28.747471 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:28.747625 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:29.243807 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:29.243832 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:29.243843 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:29.243851 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:29.246439 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:29.246461 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:29.246469 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:29.246476 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:29.246482 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:29.246492 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:29 GMT
	I1005 21:39:29.246499 1518222 round_trippers.go:580]     Audit-Id: 41f0ff1a-5a44-4ebd-a49f-f5be086de9a9
	I1005 21:39:29.246505 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:29.246680 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:29.247081 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
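The repeated GETs above are a readiness poll: the node object is fetched roughly every 500ms and its Ready condition inspected until it reports True or the 6m0s budget runs out. A minimal sketch of such a wait with client-go (illustrative, not minikube's node_ready code; the kubeconfig path and node name mirror the log):

// A sketch of the node-readiness wait logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "multinode-814558-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}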
	I1005 21:39:29.743936 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:29.743959 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:29.743968 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:29.743976 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:29.746520 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:29.746546 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:29.746555 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:29 GMT
	I1005 21:39:29.746562 1518222 round_trippers.go:580]     Audit-Id: d6ecdb4f-835c-4506-ae79-3afc54a20fdb
	I1005 21:39:29.746568 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:29.746575 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:29.746585 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:29.746595 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:29.746742 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:30.243841 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:30.243868 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:30.243880 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:30.243888 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:30.247094 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:30.247120 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:30.247132 1518222 round_trippers.go:580]     Audit-Id: c59d0a5e-a2b8-495c-953e-7bcd41fef6a0
	I1005 21:39:30.247139 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:30.247145 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:30.247152 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:30.247158 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:30.247165 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:30 GMT
	I1005 21:39:30.247269 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:30.743944 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:30.743969 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:30.743978 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:30.743986 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:30.746831 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:30.746863 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:30.746874 1518222 round_trippers.go:580]     Audit-Id: 9e658bb9-ecef-4c26-af29-26d415eb4240
	I1005 21:39:30.746880 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:30.746888 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:30.746894 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:30.746901 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:30.746911 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:30 GMT
	I1005 21:39:30.747117 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:31.243880 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:31.243905 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:31.243915 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:31.243923 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:31.246604 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:31.246666 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:31.246699 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:31 GMT
	I1005 21:39:31.246721 1518222 round_trippers.go:580]     Audit-Id: 7f31af84-78b9-4354-a1e7-41cde0c2793d
	I1005 21:39:31.246747 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:31.246755 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:31.246763 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:31.246770 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:31.246944 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:31.247326 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:31.743975 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:31.743995 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:31.744005 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:31.744013 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:31.746642 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:31.746712 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:31.746737 1518222 round_trippers.go:580]     Audit-Id: de1fe0ed-2967-4f94-b3e0-68edc0b3c3e3
	I1005 21:39:31.746760 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:31.746798 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:31.746818 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:31.746832 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:31.746839 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:31 GMT
	I1005 21:39:31.747018 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:32.244372 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:32.244393 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:32.244402 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:32.244410 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:32.246996 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:32.247021 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:32.247030 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:32.247037 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:32.247044 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:32 GMT
	I1005 21:39:32.247050 1518222 round_trippers.go:580]     Audit-Id: 9db32d53-cb80-4cbd-b076-c04794d55be5
	I1005 21:39:32.247056 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:32.247063 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:32.247183 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:32.744637 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:32.744665 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:32.744675 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:32.744693 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:32.748068 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:32.748090 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:32.748101 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:32.748108 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:32 GMT
	I1005 21:39:32.748114 1518222 round_trippers.go:580]     Audit-Id: 043143d4-accf-4256-8553-79f66c0a7b35
	I1005 21:39:32.748124 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:32.748137 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:32.748144 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:32.748316 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:33.244550 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:33.244575 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:33.244585 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:33.244592 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:33.247129 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:33.247152 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:33.247161 1518222 round_trippers.go:580]     Audit-Id: fc4e8b4b-1e0c-40cd-8ba8-85f931b078d3
	I1005 21:39:33.247168 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:33.247177 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:33.247183 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:33.247190 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:33.247197 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:33 GMT
	I1005 21:39:33.247376 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:33.247753 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:33.743751 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:33.743776 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:33.743786 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:33.743793 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:33.746509 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:33.746530 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:33.746539 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:33.746546 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:33.746552 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:33 GMT
	I1005 21:39:33.746559 1518222 round_trippers.go:580]     Audit-Id: f5b97f29-aec1-48ed-abfb-7b09c5b2a87a
	I1005 21:39:33.746565 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:33.746571 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:33.746727 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:34.243798 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:34.243825 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:34.243834 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:34.243842 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:34.246407 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:34.246426 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:34.246435 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:34.246442 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:34.246449 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:34.246455 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:34.246461 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:34 GMT
	I1005 21:39:34.246469 1518222 round_trippers.go:580]     Audit-Id: 6fa4f7da-890c-4907-a88e-65d975f70ac6
	I1005 21:39:34.246601 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:34.744677 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:34.744701 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:34.744710 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:34.744717 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:34.747330 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:34.747355 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:34.747364 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:34.747370 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:34.747376 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:34.747383 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:34.747390 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:34 GMT
	I1005 21:39:34.747401 1518222 round_trippers.go:580]     Audit-Id: 61eeda82-cb23-4932-a08c-9883f9a63d5d
	I1005 21:39:34.747582 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:35.244763 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:35.244789 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:35.244799 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:35.244807 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:35.247526 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:35.247548 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:35.247557 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:35.247564 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:35.247570 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:35.247576 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:35.247582 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:35 GMT
	I1005 21:39:35.247588 1518222 round_trippers.go:580]     Audit-Id: 4218bd0e-f30e-4b53-8c22-9f68b35f58a5
	I1005 21:39:35.247725 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:35.248099 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
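
(Aside: the repeated GETs above are minikube's node-readiness wait — the node object is re-fetched roughly every 500ms and its Ready condition checked, with node_ready.go:58 logging the verdict while it is still False. The following is only an illustrative client-go sketch of that polling pattern, not minikube's actual implementation; the kubeconfig path and the 500ms interval are assumptions taken from the default location and the log cadence above.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node roughly every 500ms, matching the cadence in the log.
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-814558-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println(`node has status "Ready":"False"`)
		time.Sleep(500 * time.Millisecond)
	}
}
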
	I1005 21:39:35.743756 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:35.743799 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:35.743809 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:35.743830 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:35.746277 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:35.746304 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:35.746313 1518222 round_trippers.go:580]     Audit-Id: 1d9d4974-05de-4058-a4f5-6e704ed679cb
	I1005 21:39:35.746322 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:35.746328 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:35.746336 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:35.746343 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:35.746350 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:35 GMT
	I1005 21:39:35.746479 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:36.244480 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:36.244503 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:36.244513 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:36.244520 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:36.248068 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:36.248095 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:36.248105 1518222 round_trippers.go:580]     Audit-Id: 792e9baa-8194-48b0-b555-e9bd252046c5
	I1005 21:39:36.248112 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:36.248118 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:36.248125 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:36.248132 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:36.248138 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:36 GMT
	I1005 21:39:36.248399 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"455","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1005 21:39:36.744556 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:36.744579 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:36.744597 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:36.744605 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:36.747642 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:36.747669 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:36.747678 1518222 round_trippers.go:580]     Audit-Id: 8b7b4e1f-118d-4dc4-a3f0-ba04637e0a68
	I1005 21:39:36.747685 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:36.747691 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:36.747697 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:36.747703 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:36.747711 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:36 GMT
	I1005 21:39:36.747863 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:37.244579 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:37.244604 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:37.244613 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:37.244621 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:37.247127 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:37.247153 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:37.247162 1518222 round_trippers.go:580]     Audit-Id: 322ccecb-b4c0-4a7d-b216-a69f8e919261
	I1005 21:39:37.247169 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:37.247175 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:37.247181 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:37.247187 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:37.247194 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:37 GMT
	I1005 21:39:37.247624 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:37.743899 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:37.743921 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:37.743935 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:37.743943 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:37.746713 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:37.746740 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:37.746749 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:37.746757 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:37.746772 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:37 GMT
	I1005 21:39:37.746782 1518222 round_trippers.go:580]     Audit-Id: 38493f34-fea4-466c-87e2-e21a46a68566
	I1005 21:39:37.746789 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:37.746795 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:37.747184 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:37.747634 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:38.243938 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:38.243960 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:38.243970 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:38.243977 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:38.246657 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:38.246678 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:38.246686 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:38 GMT
	I1005 21:39:38.246693 1518222 round_trippers.go:580]     Audit-Id: fa4438fa-c068-482a-809c-532863c3336f
	I1005 21:39:38.246699 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:38.246705 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:38.246711 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:38.246718 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:38.246831 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:38.744166 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:38.744190 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:38.744200 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:38.744213 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:38.746797 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:38.746826 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:38.746836 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:38.746843 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:38.746849 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:38.746856 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:38.746863 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:38 GMT
	I1005 21:39:38.746876 1518222 round_trippers.go:580]     Audit-Id: 63a02cfd-7fc6-479a-b223-ba9a1e29de60
	I1005 21:39:38.747103 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:39.244190 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:39.244213 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:39.244223 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:39.244230 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:39.246893 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:39.246915 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:39.246924 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:39.246930 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:39 GMT
	I1005 21:39:39.246937 1518222 round_trippers.go:580]     Audit-Id: 711c83ab-9004-441e-b248-9a80d26f30bb
	I1005 21:39:39.246943 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:39.246950 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:39.246956 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:39.247342 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:39.744124 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:39.744146 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:39.744156 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:39.744163 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:39.746625 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:39.746649 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:39.746658 1518222 round_trippers.go:580]     Audit-Id: fd82a4c9-dbce-4530-a167-5a26137eadaf
	I1005 21:39:39.746665 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:39.746671 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:39.746678 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:39.746688 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:39.746697 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:39 GMT
	I1005 21:39:39.746920 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:40.243986 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:40.244019 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:40.244039 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:40.244046 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:40.246720 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:40.246747 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:40.246756 1518222 round_trippers.go:580]     Audit-Id: ef2d31c2-b2ba-4097-8bfa-4d274ef82892
	I1005 21:39:40.246763 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:40.246769 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:40.246775 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:40.246782 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:40.246789 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:40 GMT
	I1005 21:39:40.247154 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:40.247569 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:40.744356 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:40.744378 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:40.744388 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:40.744395 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:40.747065 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:40.747090 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:40.747100 1518222 round_trippers.go:580]     Audit-Id: 93604c80-8130-443f-a983-521c60bbdb3b
	I1005 21:39:40.747106 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:40.747113 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:40.747121 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:40.747128 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:40.747135 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:40 GMT
	I1005 21:39:40.747297 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:41.244299 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:41.244324 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:41.244333 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:41.244340 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:41.247093 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:41.247118 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:41.247127 1518222 round_trippers.go:580]     Audit-Id: 79f0480b-97f5-43bd-ac66-a12892e9602a
	I1005 21:39:41.247133 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:41.247140 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:41.247146 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:41.247153 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:41.247160 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:41 GMT
	I1005 21:39:41.247255 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:41.743793 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:41.743818 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:41.743828 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:41.743835 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:41.746384 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:41.746404 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:41.746412 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:41 GMT
	I1005 21:39:41.746419 1518222 round_trippers.go:580]     Audit-Id: 04998a6e-cdeb-444d-b31a-4507d44ea61d
	I1005 21:39:41.746425 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:41.746431 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:41.746439 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:41.746446 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:41.746574 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:42.244820 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:42.244847 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:42.244865 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:42.244872 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:42.249110 1518222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1005 21:39:42.249137 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:42.249146 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:42 GMT
	I1005 21:39:42.249153 1518222 round_trippers.go:580]     Audit-Id: 50cddc90-9dd4-40b6-baf0-bcd606a71eba
	I1005 21:39:42.249161 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:42.249167 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:42.249174 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:42.249180 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:42.249774 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:42.250190 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:42.744095 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:42.744116 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:42.744125 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:42.744133 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:42.746884 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:42.746906 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:42.746914 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:42.746921 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:42.746927 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:42.746933 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:42.746940 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:42 GMT
	I1005 21:39:42.746948 1518222 round_trippers.go:580]     Audit-Id: 6b218342-032a-43ab-974b-142dca01ee2a
	I1005 21:39:42.747130 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:43.243703 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:43.243727 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:43.243737 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:43.243745 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:43.246255 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:43.246275 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:43.246284 1518222 round_trippers.go:580]     Audit-Id: 3da97e8b-1fd1-45fe-9580-46ccdbd23707
	I1005 21:39:43.246290 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:43.246297 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:43.246303 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:43.246310 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:43.246316 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:43 GMT
	I1005 21:39:43.246421 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:43.743741 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:43.743762 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:43.743772 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:43.743779 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:43.746437 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:43.746467 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:43.746476 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:43.746483 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:43 GMT
	I1005 21:39:43.746489 1518222 round_trippers.go:580]     Audit-Id: f7e033c0-522c-4dd3-b9eb-c3afd2860fa2
	I1005 21:39:43.746496 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:43.746502 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:43.746509 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:43.746636 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:44.244513 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:44.244539 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:44.244549 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:44.244557 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:44.247871 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:44.247899 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:44.247908 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:44.247915 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:44.247922 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:44 GMT
	I1005 21:39:44.247928 1518222 round_trippers.go:580]     Audit-Id: 701fe6b8-3152-455f-a206-8cc359358df2
	I1005 21:39:44.247934 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:44.247942 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:44.248200 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:44.743776 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:44.743802 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:44.743811 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:44.743818 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:44.746780 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:44.746807 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:44.746816 1518222 round_trippers.go:580]     Audit-Id: f141c782-2703-47bd-9128-91bd0fc64cf4
	I1005 21:39:44.746822 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:44.746829 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:44.746835 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:44.746841 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:44.746849 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:44 GMT
	I1005 21:39:44.746971 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:44.747352 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:45.262362 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:45.262392 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:45.262404 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:45.262413 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:45.270076 1518222 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1005 21:39:45.270102 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:45.270112 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:45.270119 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:45.270127 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:45 GMT
	I1005 21:39:45.270134 1518222 round_trippers.go:580]     Audit-Id: 14e776ca-90a2-4c2c-8571-98a50b7e3d57
	I1005 21:39:45.270141 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:45.270147 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:45.270241 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:45.743758 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:45.743783 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:45.743793 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:45.743800 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:45.746511 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:45.746537 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:45.746546 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:45.746552 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:45.746559 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:45 GMT
	I1005 21:39:45.746566 1518222 round_trippers.go:580]     Audit-Id: 232755db-66fd-4862-9a2e-0ecc3238b6e3
	I1005 21:39:45.746574 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:45.746585 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:45.746709 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:46.243845 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:46.243869 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:46.243879 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:46.243886 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:46.246877 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:46.246901 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:46.246910 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:46.246916 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:46.246923 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:46.246929 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:46 GMT
	I1005 21:39:46.246935 1518222 round_trippers.go:580]     Audit-Id: c74a4e74-750e-4cea-9db7-272a5606617e
	I1005 21:39:46.246942 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:46.247043 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:46.744066 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:46.744090 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:46.744100 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:46.744107 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:46.746625 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:46.746650 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:46.746660 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:46.746667 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:46.746674 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:46.746680 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:46.746691 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:46 GMT
	I1005 21:39:46.746697 1518222 round_trippers.go:580]     Audit-Id: e9bb146f-efc7-450d-b223-1ed4121183bf
	I1005 21:39:46.747081 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:46.747466 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:47.244137 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:47.244157 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:47.244167 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:47.244174 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:47.246830 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:47.246851 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:47.246859 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:47.246866 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:47.246872 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:47.246880 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:47 GMT
	I1005 21:39:47.246886 1518222 round_trippers.go:580]     Audit-Id: e7857b0c-d7b2-4553-b3b7-4a9851a68204
	I1005 21:39:47.246892 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:47.246995 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:47.743731 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:47.743761 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:47.743772 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:47.743779 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:47.746866 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:47.746892 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:47.746901 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:47.746908 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:47.746914 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:47.746920 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:47.746927 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:47 GMT
	I1005 21:39:47.746934 1518222 round_trippers.go:580]     Audit-Id: 136b3259-662f-4006-b023-ec4bd56c887b
	I1005 21:39:47.747204 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:48.244072 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:48.244095 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:48.244104 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:48.244111 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:48.246868 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:48.246891 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:48.246900 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:48.246907 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:48.246913 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:48 GMT
	I1005 21:39:48.246920 1518222 round_trippers.go:580]     Audit-Id: 084c9466-516a-4d4d-91f9-7c8ece57cce6
	I1005 21:39:48.246926 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:48.246932 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:48.247203 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:48.743940 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:48.743966 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:48.743977 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:48.743984 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:48.746543 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:48.746564 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:48.746572 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:48.746579 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:48.746585 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:48 GMT
	I1005 21:39:48.746592 1518222 round_trippers.go:580]     Audit-Id: ca632067-03a5-4aa7-8a37-2af0ef654ee4
	I1005 21:39:48.746598 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:48.746604 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:48.746788 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:49.243905 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:49.243927 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:49.243937 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:49.243945 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:49.247019 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:49.247046 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:49.247056 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:49.247063 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:49.247069 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:49.247076 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:49 GMT
	I1005 21:39:49.247083 1518222 round_trippers.go:580]     Audit-Id: 9d0c9591-7033-4578-ac5b-2ed4b20bfe2f
	I1005 21:39:49.247089 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:49.247375 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:49.247773 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:49.744524 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:49.744544 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:49.744554 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:49.744561 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:49.747005 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:49.747025 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:49.747033 1518222 round_trippers.go:580]     Audit-Id: fa727ff5-3692-4cad-8975-6e9ae547aac4
	I1005 21:39:49.747039 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:49.747045 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:49.747052 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:49.747058 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:49.747064 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:49 GMT
	I1005 21:39:49.747198 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:50.244471 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:50.244495 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:50.244506 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:50.244513 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:50.247221 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:50.247245 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:50.247254 1518222 round_trippers.go:580]     Audit-Id: c9743ab5-dd7d-45cc-962d-0b7997ecd765
	I1005 21:39:50.247261 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:50.247267 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:50.247274 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:50.247286 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:50.247293 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:50 GMT
	I1005 21:39:50.247622 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:50.744289 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:50.744314 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:50.744324 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:50.744331 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:50.747046 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:50.747069 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:50.747078 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:50.747085 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:50.747091 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:50.747098 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:50.747104 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:50 GMT
	I1005 21:39:50.747111 1518222 round_trippers.go:580]     Audit-Id: e6b4f740-a4cb-4ba6-9951-f0a26529091a
	I1005 21:39:50.747467 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:51.244197 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:51.244219 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:51.244232 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:51.244241 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:51.246784 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:51.246809 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:51.246818 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:51 GMT
	I1005 21:39:51.246825 1518222 round_trippers.go:580]     Audit-Id: 2859d790-a80f-4931-a9be-571760a2c195
	I1005 21:39:51.246831 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:51.246838 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:51.246846 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:51.246854 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:51.247045 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:51.744735 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:51.744757 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:51.744768 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:51.744775 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:51.747294 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:51.747313 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:51.747322 1518222 round_trippers.go:580]     Audit-Id: 5e80ddca-613a-492c-a722-f25b6f4b6455
	I1005 21:39:51.747328 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:51.747335 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:51.747341 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:51.747347 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:51.747353 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:51 GMT
	I1005 21:39:51.747550 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:51.747911 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:52.244571 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:52.244593 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:52.244602 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:52.244610 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:52.247359 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:52.247385 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:52.247394 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:52 GMT
	I1005 21:39:52.247401 1518222 round_trippers.go:580]     Audit-Id: c6303520-1b25-43ed-9037-bfea30020863
	I1005 21:39:52.247407 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:52.247413 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:52.247419 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:52.247426 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:52.247855 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:52.744532 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:52.744553 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:52.744563 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:52.744570 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:52.747131 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:52.747153 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:52.747161 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:52 GMT
	I1005 21:39:52.747168 1518222 round_trippers.go:580]     Audit-Id: ac2a21e8-1870-48fa-b776-110d53fb78f2
	I1005 21:39:52.747174 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:52.747180 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:52.747186 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:52.747193 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:52.747351 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:53.244486 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:53.244512 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:53.244521 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:53.244529 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:53.247182 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:53.247208 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:53.247217 1518222 round_trippers.go:580]     Audit-Id: b8800268-7f4a-4312-8059-190aeff1a149
	I1005 21:39:53.247224 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:53.247230 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:53.247236 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:53.247242 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:53.247248 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:53 GMT
	I1005 21:39:53.247357 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:53.744767 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:53.744793 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:53.744805 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:53.744812 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:53.747375 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:53.747399 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:53.747408 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:53.747414 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:53.747420 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:53.747427 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:53.747433 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:53 GMT
	I1005 21:39:53.747439 1518222 round_trippers.go:580]     Audit-Id: d7b097a8-3807-4df7-8764-ddde12d1bb4b
	I1005 21:39:53.747727 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:53.748111 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:54.243803 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:54.243825 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:54.243836 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:54.243843 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:54.246548 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:54.246575 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:54.246584 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:54.246591 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:54 GMT
	I1005 21:39:54.246598 1518222 round_trippers.go:580]     Audit-Id: 7468389d-bb6d-40cb-a883-2e5412ff9b11
	I1005 21:39:54.246604 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:54.246611 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:54.246617 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:54.246831 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:54.743713 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:54.743735 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:54.743744 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:54.743752 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:54.746379 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:54.746401 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:54.746410 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:54.746417 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:54.746423 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:54.746430 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:54.746436 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:54 GMT
	I1005 21:39:54.746442 1518222 round_trippers.go:580]     Audit-Id: 94cff191-8a1b-41a5-bdcf-bc20cde2c793
	I1005 21:39:54.746636 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:55.244276 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:55.244301 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:55.244321 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:55.244329 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:55.246922 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:55.246947 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:55.246956 1518222 round_trippers.go:580]     Audit-Id: 5f38ed56-4ea2-45ce-9e6b-18a9a1e0097c
	I1005 21:39:55.246963 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:55.246969 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:55.246976 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:55.246982 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:55.246989 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:55 GMT
	I1005 21:39:55.247181 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:55.744321 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:55.744342 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:55.744352 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:55.744359 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:55.746965 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:55.746985 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:55.746993 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:55.747000 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:55.747006 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:55.747012 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:55.747019 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:55 GMT
	I1005 21:39:55.747025 1518222 round_trippers.go:580]     Audit-Id: be5864fb-cf88-4445-865b-2c8a2c0ceaf9
	I1005 21:39:55.747363 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:56.244219 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:56.244242 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:56.244253 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:56.244260 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:56.246676 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:56.246700 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:56.246708 1518222 round_trippers.go:580]     Audit-Id: c786ec8c-48ea-4805-b802-049a14c621d1
	I1005 21:39:56.246717 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:56.246723 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:56.246730 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:56.246736 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:56.246743 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:56 GMT
	I1005 21:39:56.246927 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:56.247300 1518222 node_ready.go:58] node "multinode-814558-m02" has status "Ready":"False"
	I1005 21:39:56.743993 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:56.744020 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:56.744031 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:56.744038 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:56.746999 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:56.747024 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:56.747036 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:56.747046 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:56.747053 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:56.747060 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:56.747069 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:56 GMT
	I1005 21:39:56.747081 1518222 round_trippers.go:580]     Audit-Id: 043f3024-6133-43c8-a173-a91459fed074
	I1005 21:39:56.747580 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:57.244181 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:57.244213 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:57.244223 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:57.244231 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:57.246925 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:57.246948 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:57.246956 1518222 round_trippers.go:580]     Audit-Id: 72f1513a-f7a4-4c75-a820-1aa983dc1cd2
	I1005 21:39:57.246963 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:57.246971 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:57.246978 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:57.246985 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:57.246991 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:57 GMT
	I1005 21:39:57.247151 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:57.743772 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:57.743795 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:57.743804 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:57.743813 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:57.746418 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:57.746454 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:57.746463 1518222 round_trippers.go:580]     Audit-Id: 6dbf83af-f0af-401f-97ce-8247f9d37dcc
	I1005 21:39:57.746470 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:57.746491 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:57.746505 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:57.746511 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:57.746524 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:57 GMT
	I1005 21:39:57.746716 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"475","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I1005 21:39:58.244068 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:58.244090 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.244099 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.244106 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.246547 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.246572 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.246581 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.246588 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.246594 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.246601 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.246609 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.246615 1518222 round_trippers.go:580]     Audit-Id: 158b359c-6628-41ab-8c27-5b1ef94a583d
	I1005 21:39:58.246724 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"498","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1005 21:39:58.247092 1518222 node_ready.go:49] node "multinode-814558-m02" has status "Ready":"True"
	I1005 21:39:58.247108 1518222 node_ready.go:38] duration metric: took 31.014536938s waiting for node "multinode-814558-m02" to be "Ready" ...
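
The node_ready lines above are a readiness poll: the Node object is fetched every ~500 ms and its status.conditions are scanned until the Ready condition reports True, which happened here after 31s once the node's resourceVersion advanced from 475 to 498. A minimal client-go sketch of that check follows; the file name, poll loop, and use of the default kubeconfig path are illustrative assumptions, not minikube's actual helper code.

// nodeready_sketch.go — hypothetical re-creation of the node_ready poll above:
// fetch the Node every 500 ms and stop once its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady scans status.conditions for the Ready condition, the same
// field the poll above inspects in each GET response.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption for the sketch: the default ~/.kube/config; minikube points
	// its client at the profile's generated kubeconfig instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-814558-m02", metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
}
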
	I1005 21:39:58.247119 1518222 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:39:58.247185 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1005 21:39:58.247196 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.247203 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.247210 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.250864 1518222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1005 21:39:58.250889 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.250898 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.250904 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.250910 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.250916 1518222 round_trippers.go:580]     Audit-Id: 705352f4-83d4-4ecb-845c-52ac2111821f
	I1005 21:39:58.250923 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.250930 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.251521 1518222 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"498"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"409","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
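
With the node Ready, the wait moves to pod_ready.go: the GET above lists everything in kube-system unfiltered, and the system-critical labels enumerated at the start of the phase are matched in code. A hedged alternative sketch, reusing the clientset from the sketch above, shows the same query scoped server-side with a label selector (the selector shown is one of the several the log enumerates):

// Hypothetical snippet: list kube-system pods matching one of the
// system-critical selectors from the log (k8s-app=kube-dns).
pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
	LabelSelector: "k8s-app=kube-dns",
})
if err != nil {
	panic(err)
}
for _, p := range pods.Items {
	fmt.Println(p.Name, p.Status.Phase)
}
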
	I1005 21:39:58.254541 1518222 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.254627 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6bvj5
	I1005 21:39:58.254640 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.254650 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.254659 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.257406 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.257428 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.257436 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.257443 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.257449 1518222 round_trippers.go:580]     Audit-Id: 3cedbd35-f29c-439c-a2e6-48876a17190e
	I1005 21:39:58.257456 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.257467 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.257477 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.257800 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6bvj5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0961e1d-4075-4c8e-94d9-9c34564f71df","resourceVersion":"409","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e72f1f53-83f4-4919-913a-aed5f17ec03a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e72f1f53-83f4-4919-913a-aed5f17ec03a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1005 21:39:58.258344 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.258362 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.258371 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.258378 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.260827 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.260863 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.260872 1518222 round_trippers.go:580]     Audit-Id: 7763f560-01a9-42f7-802f-b2780a6d0289
	I1005 21:39:58.260885 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.260893 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.260900 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.260907 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.260921 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.261170 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:58.261582 1518222 pod_ready.go:92] pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:58.261602 1518222 pod_ready.go:81] duration metric: took 7.032821ms waiting for pod "coredns-5dd5756b68-6bvj5" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.261617 1518222 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.261686 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-814558
	I1005 21:39:58.261694 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.261703 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.261710 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.264306 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.264326 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.264334 1518222 round_trippers.go:580]     Audit-Id: 7b18e641-db1e-4cf1-82b6-60dc6750132d
	I1005 21:39:58.264340 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.264346 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.264353 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.264359 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.264366 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.264611 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-814558","namespace":"kube-system","uid":"f9ec7415-1ccc-4ab0-a62e-855fd2e89920","resourceVersion":"265","creationTimestamp":"2023-10-05T21:38:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5e488af0d1cc97f30d1e85d9d7859da3","kubernetes.io/config.mirror":"5e488af0d1cc97f30d1e85d9d7859da3","kubernetes.io/config.seen":"2023-10-05T21:38:17.423098181Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1005 21:39:58.265125 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.265144 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.265152 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.265159 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.267508 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.267533 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.267541 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.267548 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.267555 1518222 round_trippers.go:580]     Audit-Id: 696997c3-e633-4753-a18c-cea017358da8
	I1005 21:39:58.267563 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.267570 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.267579 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.267706 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:58.268092 1518222 pod_ready.go:92] pod "etcd-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:58.268113 1518222 pod_ready.go:81] duration metric: took 6.484638ms waiting for pod "etcd-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.268129 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.268186 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-814558
	I1005 21:39:58.268197 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.268205 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.268212 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.270587 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.270612 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.270621 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.270628 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.270634 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.270641 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.270647 1518222 round_trippers.go:580]     Audit-Id: 8eb7f8f9-8336-4bf0-977b-6a68f769b1d6
	I1005 21:39:58.270654 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.270777 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-814558","namespace":"kube-system","uid":"5d4b6568-b5be-4a73-b543-87354078f3e7","resourceVersion":"270","creationTimestamp":"2023-10-05T21:38:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"48dffb1033d3aa2f4aa5ffa4543bf256","kubernetes.io/config.mirror":"48dffb1033d3aa2f4aa5ffa4543bf256","kubernetes.io/config.seen":"2023-10-05T21:38:17.423099773Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1005 21:39:58.271290 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.271308 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.271317 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.271324 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.273630 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.273663 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.273672 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.273679 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.273685 1518222 round_trippers.go:580]     Audit-Id: 486f8404-137a-4bb8-a05d-555b20d6377a
	I1005 21:39:58.273695 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.273701 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.273712 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.273810 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:58.274219 1518222 pod_ready.go:92] pod "kube-apiserver-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:58.274235 1518222 pod_ready.go:81] duration metric: took 6.099506ms waiting for pod "kube-apiserver-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.274246 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.274351 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-814558
	I1005 21:39:58.274362 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.274370 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.274376 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.276837 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.276866 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.276875 1518222 round_trippers.go:580]     Audit-Id: 491ce84e-4836-4033-8c01-ee117a87c9e8
	I1005 21:39:58.276882 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.276888 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.276894 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.276904 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.276911 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.277043 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-814558","namespace":"kube-system","uid":"e3b6b429-bc4a-460a-9328-17bdb559510d","resourceVersion":"269","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0e1f0f9cedaae17855a8cbeaa7f6b78c","kubernetes.io/config.mirror":"0e1f0f9cedaae17855a8cbeaa7f6b78c","kubernetes.io/config.seen":"2023-10-05T21:38:17.423101110Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1005 21:39:58.277649 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.277665 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.277673 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.277680 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.280030 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.280052 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.280060 1518222 round_trippers.go:580]     Audit-Id: 21747699-8646-48dd-b59a-ad773a592026
	I1005 21:39:58.280066 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.280072 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.280080 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.280090 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.280096 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.280205 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:58.280578 1518222 pod_ready.go:92] pod "kube-controller-manager-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:58.280594 1518222 pod_ready.go:81] duration metric: took 6.312429ms waiting for pod "kube-controller-manager-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.280611 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lftrk" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.445006 1518222 request.go:629] Waited for 164.330075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lftrk
	I1005 21:39:58.445093 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lftrk
	I1005 21:39:58.445102 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.445111 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.445123 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.447855 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.447880 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.447888 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.447895 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.447902 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.447915 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.447922 1518222 round_trippers.go:580]     Audit-Id: c9c2ca65-6fff-470a-acf3-cdce0aa454d2
	I1005 21:39:58.447928 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.448050 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lftrk","generateName":"kube-proxy-","namespace":"kube-system","uid":"00a86d93-f9f8-4616-9b0d-639530776c04","resourceVersion":"360","creationTimestamp":"2023-10-05T21:38:37Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"030d1c05-ca2b-42bc-8181-c0109b2fd192","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"030d1c05-ca2b-42bc-8181-c0109b2fd192\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
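
The "Waited ... due to client-side throttling" lines above and below come from client-go's token-bucket rate limiter, which defaults to roughly QPS=5 and Burst=10. A hedged sketch of raising both on rest.Config; the values are illustrative, not what minikube configures.

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // sustained requests per second before the limiter delays calls
		cfg.Burst = 100 // short-term burst allowance above QPS
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
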
	I1005 21:39:58.644477 1518222 request.go:629] Waited for 195.92586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.644539 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:58.644545 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.644560 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.644574 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.647244 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.647273 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.647284 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.647291 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.647298 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.647305 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.647315 1518222 round_trippers.go:580]     Audit-Id: 5d6c2022-b104-4513-ba2c-71c850e16354
	I1005 21:39:58.647324 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.647437 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:58.647832 1518222 pod_ready.go:92] pod "kube-proxy-lftrk" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:58.647853 1518222 pod_ready.go:81] duration metric: took 367.232207ms waiting for pod "kube-proxy-lftrk" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.647864 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rlvpm" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:58.844140 1518222 request.go:629] Waited for 196.207468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlvpm
	I1005 21:39:58.844243 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlvpm
	I1005 21:39:58.844257 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:58.844266 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:58.844273 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:58.846882 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:58.846907 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:58.846921 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:58.846928 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:58.846952 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:58.846965 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:58.846972 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:58 GMT
	I1005 21:39:58.846978 1518222 round_trippers.go:580]     Audit-Id: 5d084657-c5be-4b1f-a44e-6de18e249f2b
	I1005 21:39:58.847318 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlvpm","generateName":"kube-proxy-","namespace":"kube-system","uid":"6e3149cb-7db1-4d41-86a0-16cf75e253d0","resourceVersion":"464","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"030d1c05-ca2b-42bc-8181-c0109b2fd192","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"030d1c05-ca2b-42bc-8181-c0109b2fd192\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1005 21:39:59.044074 1518222 request.go:629] Waited for 196.251964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:59.044144 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558-m02
	I1005 21:39:59.044153 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:59.044163 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:59.044170 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:59.046727 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:59.046752 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:59.046761 1518222 round_trippers.go:580]     Audit-Id: 062be56d-d16c-4a55-8de2-9047442bc80c
	I1005 21:39:59.046768 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:59.046774 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:59.046782 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:59.046797 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:59.046831 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:59 GMT
	I1005 21:39:59.046946 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558-m02","uid":"2a3b0d7e-0c2e-489c-92b5-d3340b610449","resourceVersion":"498","creationTimestamp":"2023-10-05T21:39:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:39:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1005 21:39:59.047325 1518222 pod_ready.go:92] pod "kube-proxy-rlvpm" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:59.047341 1518222 pod_ready.go:81] duration metric: took 399.470468ms waiting for pod "kube-proxy-rlvpm" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:59.047353 1518222 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:59.244767 1518222 request.go:629] Waited for 197.327605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814558
	I1005 21:39:59.244825 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814558
	I1005 21:39:59.244836 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:59.244845 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:59.244855 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:59.247480 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:59.247501 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:59.247509 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:59.247516 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:59.247523 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:59 GMT
	I1005 21:39:59.247529 1518222 round_trippers.go:580]     Audit-Id: 6f268db7-6f31-4b6e-92d4-170ffe0b2d93
	I1005 21:39:59.247535 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:59.247542 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:59.247651 1518222 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-814558","namespace":"kube-system","uid":"d161dcc2-6d30-4384-826e-ccbbc539edda","resourceVersion":"283","creationTimestamp":"2023-10-05T21:38:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dcea6c17a12dd03ffc181c343e33d23a","kubernetes.io/config.mirror":"dcea6c17a12dd03ffc181c343e33d23a","kubernetes.io/config.seen":"2023-10-05T21:38:25.080108120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-05T21:38:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1005 21:39:59.444475 1518222 request.go:629] Waited for 196.362487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:59.444537 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814558
	I1005 21:39:59.444547 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:59.444557 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:59.444567 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:59.447301 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:59.447324 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:59.447332 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:59.447339 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:59 GMT
	I1005 21:39:59.447346 1518222 round_trippers.go:580]     Audit-Id: dfe086ec-c59f-402a-87e4-baed50a11bb0
	I1005 21:39:59.447352 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:59.447358 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:59.447368 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:59.447530 1518222 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-05T21:38:21Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1005 21:39:59.447948 1518222 pod_ready.go:92] pod "kube-scheduler-multinode-814558" in "kube-system" namespace has status "Ready":"True"
	I1005 21:39:59.447968 1518222 pod_ready.go:81] duration metric: took 400.60439ms waiting for pod "kube-scheduler-multinode-814558" in "kube-system" namespace to be "Ready" ...
	I1005 21:39:59.447983 1518222 pod_ready.go:38] duration metric: took 1.200849603s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:39:59.448003 1518222 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:39:59.448060 1518222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:39:59.462323 1518222 system_svc.go:56] duration metric: took 14.31031ms WaitForService to wait for kubelet.
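
A sketch of the kubelet check above, assuming only that `systemctl is-active --quiet` exits 0 iff the unit is active, so the exit status alone answers the question. minikube issues the command through its ssh_runner; plain os/exec stands in for that here.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means the unit is active; a non-nil error means it is not
		// (or that sudo itself failed).
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
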
	I1005 21:39:59.462350 1518222 kubeadm.go:581] duration metric: took 32.256174846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:39:59.462372 1518222 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:39:59.644795 1518222 request.go:629] Waited for 182.308717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1005 21:39:59.644855 1518222 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1005 21:39:59.644867 1518222 round_trippers.go:469] Request Headers:
	I1005 21:39:59.644876 1518222 round_trippers.go:473]     Accept: application/json, */*
	I1005 21:39:59.644887 1518222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1005 21:39:59.647615 1518222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1005 21:39:59.647682 1518222 round_trippers.go:577] Response Headers:
	I1005 21:39:59.647755 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b7d11c52-139a-4243-8a3c-97a675bbe401
	I1005 21:39:59.647782 1518222 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b7a0857c-84d2-429b-a8ce-27720066fb81
	I1005 21:39:59.647806 1518222 round_trippers.go:580]     Date: Thu, 05 Oct 2023 21:39:59 GMT
	I1005 21:39:59.647829 1518222 round_trippers.go:580]     Audit-Id: 61dc138a-d184-467c-9d48-ccb74a763365
	I1005 21:39:59.647861 1518222 round_trippers.go:580]     Cache-Control: no-cache, private
	I1005 21:39:59.647876 1518222 round_trippers.go:580]     Content-Type: application/json
	I1005 21:39:59.648055 1518222 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"499"},"items":[{"metadata":{"name":"multinode-814558","uid":"26905e02-371a-4657-b253-bb1a43522530","resourceVersion":"390","creationTimestamp":"2023-10-05T21:38:22Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-814558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"300d55cee86053f5b4c7a654fc8e7b9d3c030d53","minikube.k8s.io/name":"multinode-814558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_05T21_38_26_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1005 21:39:59.648683 1518222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:39:59.648704 1518222 node_conditions.go:123] node cpu capacity is 2
	I1005 21:39:59.648715 1518222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:39:59.648725 1518222 node_conditions.go:123] node cpu capacity is 2
	I1005 21:39:59.648730 1518222 node_conditions.go:105] duration metric: took 186.331142ms to run NodePressure ...
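
A hedged client-go equivalent (not minikube's code) of the NodePressure verification just completed: fetch the nodes and inspect the MemoryPressure/DiskPressure/PIDPressure conditions along with the capacity fields the log prints (cpu, ephemeral-storage).

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Pressure conditions should all be False on a healthy node.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
				}
			}
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
	}
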
	I1005 21:39:59.648742 1518222 start.go:228] waiting for startup goroutines ...
	I1005 21:39:59.648771 1518222 start.go:242] writing updated cluster config ...
	I1005 21:39:59.649088 1518222 ssh_runner.go:195] Run: rm -f paused
	I1005 21:39:59.712079 1518222 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:39:59.715656 1518222 out.go:177] * Done! kubectl is now configured to use "multinode-814558" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 05 21:39:09 multinode-814558 crio[908]: time="2023-10-05 21:39:09.272067284Z" level=info msg="Starting container: a6be139cd918f1cbfb6369220648eae269fc08d5f5122562a84258b56af46c81" id=e873be7c-df5f-4291-bfd2-088966239206 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:39:09 multinode-814558 crio[908]: time="2023-10-05 21:39:09.288397159Z" level=info msg="Created container 92e652b077820ec3c4e3d784d9a8e5fc94255debfc21559ee7affde4f503f009: kube-system/coredns-5dd5756b68-6bvj5/coredns" id=7b612b1e-9639-460a-9f44-fb830955f45d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:39:09 multinode-814558 crio[908]: time="2023-10-05 21:39:09.289447594Z" level=info msg="Started container" PID=1935 containerID=a6be139cd918f1cbfb6369220648eae269fc08d5f5122562a84258b56af46c81 description=kube-system/storage-provisioner/storage-provisioner id=e873be7c-df5f-4291-bfd2-088966239206 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1cad9ad267146874dac7f3f0fa03e860c976d99a2d3a182b3e798e4107fa0b4
	Oct 05 21:39:09 multinode-814558 crio[908]: time="2023-10-05 21:39:09.289645616Z" level=info msg="Starting container: 92e652b077820ec3c4e3d784d9a8e5fc94255debfc21559ee7affde4f503f009" id=db1abd67-de14-4368-867b-102629e50e1f name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:39:09 multinode-814558 crio[908]: time="2023-10-05 21:39:09.309979995Z" level=info msg="Started container" PID=1958 containerID=92e652b077820ec3c4e3d784d9a8e5fc94255debfc21559ee7affde4f503f009 description=kube-system/coredns-5dd5756b68-6bvj5/coredns id=db1abd67-de14-4368-867b-102629e50e1f name=/runtime.v1.RuntimeService/StartContainer sandboxID=1683c5cf3635a2f83871cd6cde5a0fd63fabcdd07c0c4dfab537bbc24046d777
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.304240807Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-hrkj8/POD" id=2633057a-d8e1-4274-9e99-48da51338206 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.304303601Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.320936080Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-hrkj8 Namespace:default ID:f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d UID:dc2eed40-e714-4cd6-85cb-1bc9f7d60258 NetNS:/var/run/netns/0a6680dd-db2c-47d0-b875-61b456bc3a8b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.320979444Z" level=info msg="Adding pod default_busybox-5bc68d56bd-hrkj8 to CNI network \"kindnet\" (type=ptp)"
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.331578223Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-hrkj8 Namespace:default ID:f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d UID:dc2eed40-e714-4cd6-85cb-1bc9f7d60258 NetNS:/var/run/netns/0a6680dd-db2c-47d0-b875-61b456bc3a8b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.331751777Z" level=info msg="Checking pod default_busybox-5bc68d56bd-hrkj8 for CNI network kindnet (type=ptp)"
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.337800412Z" level=info msg="Ran pod sandbox f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d with infra container: default/busybox-5bc68d56bd-hrkj8/POD" id=2633057a-d8e1-4274-9e99-48da51338206 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.338804438Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fa646729-d9a0-4584-98d1-064767cfa823 name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.339016819Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fa646729-d9a0-4584-98d1-064767cfa823 name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.339912210Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=6c7e5e5b-c8b2-43ac-b22d-44d3ebced215 name=/runtime.v1.ImageService/PullImage
	Oct 05 21:40:01 multinode-814558 crio[908]: time="2023-10-05 21:40:01.341021869Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 05 21:40:02 multinode-814558 crio[908]: time="2023-10-05 21:40:02.102089677Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.399680970Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=6c7e5e5b-c8b2-43ac-b22d-44d3ebced215 name=/runtime.v1.ImageService/PullImage
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.401239031Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=22551146-dcb2-4705-97a5-374cb823a57f name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.402214314Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=22551146-dcb2-4705-97a5-374cb823a57f name=/runtime.v1.ImageService/ImageStatus
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.403221114Z" level=info msg="Creating container: default/busybox-5bc68d56bd-hrkj8/busybox" id=4a86e8a0-f11d-4888-8c5c-ba40fccae9b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.403309582Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.481881237Z" level=info msg="Created container 6ad6ce4b8b20a66191f0cf91cd5cc97ce2c3ac541c678f442c91aa8d963c6d27: default/busybox-5bc68d56bd-hrkj8/busybox" id=4a86e8a0-f11d-4888-8c5c-ba40fccae9b1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.482756763Z" level=info msg="Starting container: 6ad6ce4b8b20a66191f0cf91cd5cc97ce2c3ac541c678f442c91aa8d963c6d27" id=8804509f-de70-4de2-98d5-7399a50ac5b1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:40:03 multinode-814558 crio[908]: time="2023-10-05 21:40:03.498459480Z" level=info msg="Started container" PID=2095 containerID=6ad6ce4b8b20a66191f0cf91cd5cc97ce2c3ac541c678f442c91aa8d963c6d27 description=default/busybox-5bc68d56bd-hrkj8/busybox id=8804509f-de70-4de2-98d5-7399a50ac5b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d
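
The CRI-O entries above are the server side of CRI gRPC calls (RunPodSandbox, CreateContainer, StartContainer). A hedged sketch of querying the same endpoint with the CRI RuntimeService client; `crictl ps` does the equivalent from the command line. The socket path matches the cri-socket annotation seen earlier in this log.

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		// Full 64-char container IDs; the "container status" table below truncates them to 13.
		for _, c := range resp.Containers {
			fmt.Println(c.Id, c.Metadata.Name, c.State)
		}
	}
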
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6ad6ce4b8b20a       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   f14a417aa64f2       busybox-5bc68d56bd-hrkj8
	92e652b077820       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      59 seconds ago       Running             coredns                   0                   1683c5cf3635a       coredns-5dd5756b68-6bvj5
	a6be139cd918f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      59 seconds ago       Running             storage-provisioner       0                   c1cad9ad26714       storage-provisioner
	42ca98eabf5f4       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   442a1925beb26       kindnet-q47f5
	809812e498dd7       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      About a minute ago   Running             kube-proxy                0                   67351487d5384       kube-proxy-lftrk
	e5179ec2a1297       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   3b3230bad1435       kube-apiserver-multinode-814558
	fc51aeb8b1680       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   5155172fa62ee       kube-controller-manager-multinode-814558
	306210438fac8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   17b8c699e6241       etcd-multinode-814558
	c9c9c952d3693       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   e571273abf731       kube-scheduler-multinode-814558
	
	* 
	* ==> coredns [92e652b077820ec3c4e3d784d9a8e5fc94255debfc21559ee7affde4f503f009] <==
	* [INFO] 10.244.0.3:32970 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150572s
	[INFO] 10.244.1.2:52664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109104s
	[INFO] 10.244.1.2:38958 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00129626s
	[INFO] 10.244.1.2:54504 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097994s
	[INFO] 10.244.1.2:48416 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075134s
	[INFO] 10.244.1.2:41547 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000907288s
	[INFO] 10.244.1.2:49503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081912s
	[INFO] 10.244.1.2:40782 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071992s
	[INFO] 10.244.1.2:55415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073739s
	[INFO] 10.244.0.3:50754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104123s
	[INFO] 10.244.0.3:38933 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120402s
	[INFO] 10.244.0.3:36304 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065919s
	[INFO] 10.244.0.3:36966 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125899s
	[INFO] 10.244.1.2:40049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120451s
	[INFO] 10.244.1.2:35567 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077751s
	[INFO] 10.244.1.2:44826 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068308s
	[INFO] 10.244.1.2:38167 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006935s
	[INFO] 10.244.0.3:35223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122912s
	[INFO] 10.244.0.3:53048 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130502s
	[INFO] 10.244.0.3:35326 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009321s
	[INFO] 10.244.0.3:52576 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077383s
	[INFO] 10.244.1.2:37575 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122313s
	[INFO] 10.244.1.2:44380 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067938s
	[INFO] 10.244.1.2:41828 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065272s
	[INFO] 10.244.1.2:35160 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058182s
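
Each CoreDNS line above follows the log plugin's common format: client address, query id, a quoted "type class name proto size do bufsize" section, then rcode, response flags, response size, and duration. A small Go parser for the fields worth watching (client, type, name, rcode, latency); the regex is a sketch tailored to these lines, not a general CoreDNS parser.

	package main

	import (
		"fmt"
		"regexp"
	)

	var line = regexp.MustCompile(`\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) .*" (\S+) \S+ \d+ (\S+)`)

	func main() {
		entry := `[INFO] 10.244.1.2:38958 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00129626s`
		if m := line.FindStringSubmatch(entry); m != nil {
			fmt.Printf("client=%s type=%s name=%s rcode=%s latency=%s\n",
				m[1], m[2], m[3], m[4], m[5])
		}
	}
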
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-814558
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-814558
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=multinode-814558
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_38_26_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:38:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-814558
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:40:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:39:08 +0000   Thu, 05 Oct 2023 21:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:39:08 +0000   Thu, 05 Oct 2023 21:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:39:08 +0000   Thu, 05 Oct 2023 21:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:39:08 +0000   Thu, 05 Oct 2023 21:39:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-814558
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ca235ac1eb7406ca51f77db8c7346eb
	  System UUID:                009b5710-f8c1-42d0-b138-e6bd215f7c40
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hrkj8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-6bvj5                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-multinode-814558                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         106s
	  kube-system                 kindnet-q47f5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-multinode-814558             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-814558    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-lftrk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-multinode-814558             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 90s   kube-proxy       
	  Normal  Starting                 104s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet          Node multinode-814558 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet          Node multinode-814558 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet          Node multinode-814558 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s   node-controller  Node multinode-814558 event: Registered Node multinode-814558 in Controller
	  Normal  NodeReady                61s   kubelet          Node multinode-814558 status is now: NodeReady
	
	
	Name:               multinode-814558-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-814558-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:39:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-814558-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:40:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:39:57 +0000   Thu, 05 Oct 2023 21:39:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:39:57 +0000   Thu, 05 Oct 2023 21:39:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:39:57 +0000   Thu, 05 Oct 2023 21:39:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:39:57 +0000   Thu, 05 Oct 2023 21:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-814558-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 5205fe525eb34fb8b288a80cd590e8f3
	  System UUID:                37ea763f-1f1f-4453-bf15-fb1dd890d2f6
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ztvv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-rwqzr               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-rlvpm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  RegisteredNode           43s                node-controller  Node multinode-814558-m02 event: Registered Node multinode-814558-m02 in Controller
	  Normal  NodeHasSufficientMemory  43s (x5 over 45s)  kubelet          Node multinode-814558-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 45s)  kubelet          Node multinode-814558-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 45s)  kubelet          Node multinode-814558-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12s                kubelet          Node multinode-814558-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001109] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000053 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001037] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +0.002754] FS-Cache: Duplicate cookie detected
	[  +0.000682] FS-Cache: O-cookie c=0000004d [p=0000004a fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000005885d3f4
	[  +0.001100] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000054 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000915] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000009c3c0e5e
	[  +0.001020] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +2.998730] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000004b [p=0000004a fl=226 nc=0 na=1]
	[  +0.000947] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000003ef1d116
	[  +0.001076] FS-Cache: O-key=[8] '6ed7c90000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000056 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000003824801
	[  +0.001036] FS-Cache: N-key=[8] '6ed7c90000000000'
	[  +0.302950] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000050 [p=0000004a fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=00000000b99a9016
	[  +0.001212] FS-Cache: O-key=[8] '74d7c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000057 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001183] FS-Cache: N-key=[8] '74d7c90000000000'
	
	* 
	* ==> etcd [306210438fac83ad382ebb193c651b0504304e7b1815192117322a0f147f94f2] <==
	* {"level":"info","ts":"2023-10-05T21:38:18.321884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-05T21:38:18.321981Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-05T21:38:18.325609Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-05T21:38:18.325797Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-05T21:38:18.325812Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-05T21:38:18.325989Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-05T21:38:18.326026Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T21:38:18.701136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-05T21:38:18.701195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-05T21:38:18.701222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-05T21:38:18.701235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:38:18.701242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-05T21:38:18.701252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-05T21:38:18.701261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-05T21:38:18.705518Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-814558 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:38:18.705707Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:38:18.705799Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:38:18.707083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:38:18.707146Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:38:18.707538Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:38:18.707559Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-05T21:38:18.707616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:38:18.707684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:38:18.707709Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:38:18.757761Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  21:40:09 up  7:22,  0 users,  load average: 1.36, 1.82, 1.65
	Linux multinode-814558 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [42ca98eabf5f4c41f6ca8f4ce85cecc1b364a39fa586e68f2406df31f92c7094] <==
	* I1005 21:39:08.359853       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:08.359888       1 main.go:227] handling current node
	I1005 21:39:18.412283       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:18.412621       1 main.go:227] handling current node
	I1005 21:39:28.424720       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:28.424747       1 main.go:227] handling current node
	I1005 21:39:28.424758       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 21:39:28.424763       1 main.go:250] Node multinode-814558-m02 has CIDR [10.244.1.0/24] 
	I1005 21:39:28.424918       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1005 21:39:38.439195       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:38.439398       1 main.go:227] handling current node
	I1005 21:39:38.439441       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 21:39:38.439477       1 main.go:250] Node multinode-814558-m02 has CIDR [10.244.1.0/24] 
	I1005 21:39:48.448571       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:48.448601       1 main.go:227] handling current node
	I1005 21:39:48.448614       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 21:39:48.448620       1 main.go:250] Node multinode-814558-m02 has CIDR [10.244.1.0/24] 
	I1005 21:39:58.458894       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:39:58.458922       1 main.go:227] handling current node
	I1005 21:39:58.458933       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 21:39:58.458939       1 main.go:250] Node multinode-814558-m02 has CIDR [10.244.1.0/24] 
	I1005 21:40:08.469484       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1005 21:40:08.469513       1 main.go:227] handling current node
	I1005 21:40:08.469525       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1005 21:40:08.469531       1 main.go:250] Node multinode-814558-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [e5179ec2a1297987cb6c9fca05717ff809ef4c2adb9643061f05de8f1336b32b] <==
	* I1005 21:38:22.046137       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 21:38:22.046450       1 shared_informer.go:318] Caches are synced for configmaps
	I1005 21:38:22.047921       1 controller.go:624] quota admission added evaluator for: namespaces
	I1005 21:38:22.054969       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1005 21:38:22.071641       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 21:38:22.086403       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1005 21:38:22.086920       1 aggregator.go:166] initial CRD sync complete...
	I1005 21:38:22.086981       1 autoregister_controller.go:141] Starting autoregister controller
	I1005 21:38:22.087016       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1005 21:38:22.087048       1 cache.go:39] Caches are synced for autoregister controller
	I1005 21:38:22.751690       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1005 21:38:22.756648       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1005 21:38:22.756675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1005 21:38:23.382452       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 21:38:23.430334       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1005 21:38:23.481436       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1005 21:38:23.489248       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1005 21:38:23.490389       1 controller.go:624] quota admission added evaluator for: endpoints
	I1005 21:38:23.495178       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1005 21:38:24.024694       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1005 21:38:24.985213       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1005 21:38:25.012142       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1005 21:38:25.027360       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1005 21:38:37.442040       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1005 21:38:37.650854       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [fc51aeb8b1680219498d38b6385a07134ef6f22bf609508f242bb51cb9e3f85e] <==
	* I1005 21:38:38.789217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.317µs"
	I1005 21:39:08.834971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.278µs"
	I1005 21:39:08.860722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.745µs"
	I1005 21:39:10.337524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.801µs"
	I1005 21:39:10.370178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.582011ms"
	I1005 21:39:10.370364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.643µs"
	I1005 21:39:11.895072       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1005 21:39:26.108502       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-814558-m02\" does not exist"
	I1005 21:39:26.137194       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-814558-m02" podCIDRs=["10.244.1.0/24"]
	I1005 21:39:26.138561       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rlvpm"
	I1005 21:39:26.149110       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwqzr"
	I1005 21:39:26.897312       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-814558-m02"
	I1005 21:39:26.897784       1 event.go:307] "Event occurred" object="multinode-814558-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-814558-m02 event: Registered Node multinode-814558-m02 in Controller"
	I1005 21:39:57.778807       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-814558-m02"
	I1005 21:40:00.949648       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1005 21:40:00.963820       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ztvv9"
	I1005 21:40:00.987032       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hrkj8"
	I1005 21:40:01.012689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.805956ms"
	I1005 21:40:01.041469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.644169ms"
	I1005 21:40:01.041628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.514µs"
	I1005 21:40:01.913671       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-ztvv9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-ztvv9"
	I1005 21:40:03.784721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.782758ms"
	I1005 21:40:03.785097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="118.366µs"
	I1005 21:40:04.444979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.480989ms"
	I1005 21:40:04.445075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.698µs"
	
	* 
	* ==> kube-proxy [809812e498dd7d529b26f85ebbcff0610fbd6a1d47888dd8faa8fde8b4964276] <==
	* I1005 21:38:38.088731       1 server_others.go:69] "Using iptables proxy"
	I1005 21:38:38.109121       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1005 21:38:38.225295       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:38:38.338065       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:38:38.338212       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:38:38.338244       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:38:38.338340       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:38:38.338622       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:38:38.338687       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:38:38.340423       1 config.go:188] "Starting service config controller"
	I1005 21:38:38.340477       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:38:38.340540       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:38:38.340568       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:38:38.341280       1 config.go:315] "Starting node config controller"
	I1005 21:38:38.341366       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:38:38.441280       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1005 21:38:38.441470       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:38:38.441619       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c9c9c952d36930b8fc66b34f7cbaf5e4b407285fd26af2703ad0e7e780b590f9] <==
	* W1005 21:38:22.067005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 21:38:22.067539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 21:38:22.067073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:38:22.067554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1005 21:38:22.067118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 21:38:22.067569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 21:38:22.067154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:38:22.067586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 21:38:22.067210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 21:38:22.067607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1005 21:38:22.067266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:38:22.067620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 21:38:22.873679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1005 21:38:22.873820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1005 21:38:22.994605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:38:22.994644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 21:38:23.033511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1005 21:38:23.033548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1005 21:38:23.053183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 21:38:23.053323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1005 21:38:23.136713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:38:23.136756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1005 21:38:23.398203       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:38:23.398248       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1005 21:38:26.242877       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.464945    1398 topology_manager.go:215] "Topology Admit Handler" podUID="00a86d93-f9f8-4616-9b0d-639530776c04" podNamespace="kube-system" podName="kube-proxy-lftrk"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.484223    1398 topology_manager.go:215] "Topology Admit Handler" podUID="4022c47f-9cbd-4500-a2aa-92e0caaedf99" podNamespace="kube-system" podName="kindnet-q47f5"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570435    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvczf\" (UniqueName: \"kubernetes.io/projected/4022c47f-9cbd-4500-a2aa-92e0caaedf99-kube-api-access-fvczf\") pod \"kindnet-q47f5\" (UID: \"4022c47f-9cbd-4500-a2aa-92e0caaedf99\") " pod="kube-system/kindnet-q47f5"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570498    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00a86d93-f9f8-4616-9b0d-639530776c04-lib-modules\") pod \"kube-proxy-lftrk\" (UID: \"00a86d93-f9f8-4616-9b0d-639530776c04\") " pod="kube-system/kube-proxy-lftrk"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570524    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4022c47f-9cbd-4500-a2aa-92e0caaedf99-lib-modules\") pod \"kindnet-q47f5\" (UID: \"4022c47f-9cbd-4500-a2aa-92e0caaedf99\") " pod="kube-system/kindnet-q47f5"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570547    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00a86d93-f9f8-4616-9b0d-639530776c04-xtables-lock\") pod \"kube-proxy-lftrk\" (UID: \"00a86d93-f9f8-4616-9b0d-639530776c04\") " pod="kube-system/kube-proxy-lftrk"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570573    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4022c47f-9cbd-4500-a2aa-92e0caaedf99-cni-cfg\") pod \"kindnet-q47f5\" (UID: \"4022c47f-9cbd-4500-a2aa-92e0caaedf99\") " pod="kube-system/kindnet-q47f5"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570601    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrzm5\" (UniqueName: \"kubernetes.io/projected/00a86d93-f9f8-4616-9b0d-639530776c04-kube-api-access-qrzm5\") pod \"kube-proxy-lftrk\" (UID: \"00a86d93-f9f8-4616-9b0d-639530776c04\") " pod="kube-system/kube-proxy-lftrk"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570624    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4022c47f-9cbd-4500-a2aa-92e0caaedf99-xtables-lock\") pod \"kindnet-q47f5\" (UID: \"4022c47f-9cbd-4500-a2aa-92e0caaedf99\") " pod="kube-system/kindnet-q47f5"
	Oct 05 21:38:37 multinode-814558 kubelet[1398]: I1005 21:38:37.570651    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00a86d93-f9f8-4616-9b0d-639530776c04-kube-proxy\") pod \"kube-proxy-lftrk\" (UID: \"00a86d93-f9f8-4616-9b0d-639530776c04\") " pod="kube-system/kube-proxy-lftrk"
	Oct 05 21:38:38 multinode-814558 kubelet[1398]: I1005 21:38:38.405657    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lftrk" podStartSLOduration=1.405616142 podCreationTimestamp="2023-10-05 21:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 21:38:38.404947146 +0000 UTC m=+13.454177606" watchObservedRunningTime="2023-10-05 21:38:38.405616142 +0000 UTC m=+13.454846594"
	Oct 05 21:38:38 multinode-814558 kubelet[1398]: I1005 21:38:38.405756    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-q47f5" podStartSLOduration=1.405736978 podCreationTimestamp="2023-10-05 21:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 21:38:38.346531164 +0000 UTC m=+13.395761624" watchObservedRunningTime="2023-10-05 21:38:38.405736978 +0000 UTC m=+13.454967447"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.798081    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.830332    1398 topology_manager.go:215] "Topology Admit Handler" podUID="ddcc6c9f-5045-4c99-9808-25700d745ce0" podNamespace="kube-system" podName="storage-provisioner"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.832171    1398 topology_manager.go:215] "Topology Admit Handler" podUID="c0961e1d-4075-4c8e-94d9-9c34564f71df" podNamespace="kube-system" podName="coredns-5dd5756b68-6bvj5"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.910647    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ddcc6c9f-5045-4c99-9808-25700d745ce0-tmp\") pod \"storage-provisioner\" (UID: \"ddcc6c9f-5045-4c99-9808-25700d745ce0\") " pod="kube-system/storage-provisioner"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.910725    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmnw\" (UniqueName: \"kubernetes.io/projected/ddcc6c9f-5045-4c99-9808-25700d745ce0-kube-api-access-7wmnw\") pod \"storage-provisioner\" (UID: \"ddcc6c9f-5045-4c99-9808-25700d745ce0\") " pod="kube-system/storage-provisioner"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.910756    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0961e1d-4075-4c8e-94d9-9c34564f71df-config-volume\") pod \"coredns-5dd5756b68-6bvj5\" (UID: \"c0961e1d-4075-4c8e-94d9-9c34564f71df\") " pod="kube-system/coredns-5dd5756b68-6bvj5"
	Oct 05 21:39:08 multinode-814558 kubelet[1398]: I1005 21:39:08.910782    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9t2\" (UniqueName: \"kubernetes.io/projected/c0961e1d-4075-4c8e-94d9-9c34564f71df-kube-api-access-6s9t2\") pod \"coredns-5dd5756b68-6bvj5\" (UID: \"c0961e1d-4075-4c8e-94d9-9c34564f71df\") " pod="kube-system/coredns-5dd5756b68-6bvj5"
	Oct 05 21:39:09 multinode-814558 kubelet[1398]: W1005 21:39:09.203393    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/crio-1683c5cf3635a2f83871cd6cde5a0fd63fabcdd07c0c4dfab537bbc24046d777 WatchSource:0}: Error finding container 1683c5cf3635a2f83871cd6cde5a0fd63fabcdd07c0c4dfab537bbc24046d777: Status 404 returned error can't find the container with id 1683c5cf3635a2f83871cd6cde5a0fd63fabcdd07c0c4dfab537bbc24046d777
	Oct 05 21:39:10 multinode-814558 kubelet[1398]: I1005 21:39:10.336469    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.336423549 podCreationTimestamp="2023-10-05 21:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 21:39:09.348445311 +0000 UTC m=+44.397675762" watchObservedRunningTime="2023-10-05 21:39:10.336423549 +0000 UTC m=+45.385654000"
	Oct 05 21:39:10 multinode-814558 kubelet[1398]: I1005 21:39:10.356730    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6bvj5" podStartSLOduration=33.35668637 podCreationTimestamp="2023-10-05 21:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-05 21:39:10.336946058 +0000 UTC m=+45.386176518" watchObservedRunningTime="2023-10-05 21:39:10.35668637 +0000 UTC m=+45.405916822"
	Oct 05 21:40:01 multinode-814558 kubelet[1398]: I1005 21:40:01.002345    1398 topology_manager.go:215] "Topology Admit Handler" podUID="dc2eed40-e714-4cd6-85cb-1bc9f7d60258" podNamespace="default" podName="busybox-5bc68d56bd-hrkj8"
	Oct 05 21:40:01 multinode-814558 kubelet[1398]: I1005 21:40:01.102847    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbgxn\" (UniqueName: \"kubernetes.io/projected/dc2eed40-e714-4cd6-85cb-1bc9f7d60258-kube-api-access-gbgxn\") pod \"busybox-5bc68d56bd-hrkj8\" (UID: \"dc2eed40-e714-4cd6-85cb-1bc9f7d60258\") " pod="default/busybox-5bc68d56bd-hrkj8"
	Oct 05 21:40:01 multinode-814558 kubelet[1398]: W1005 21:40:01.336129    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/crio-f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d WatchSource:0}: Error finding container f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d: Status 404 returned error can't find the container with id f14a417aa64f29c27516d073eb46fdd85167e1cf5293ef37d394f2297c01a43d
	

                                                
                                                
-- /stdout --
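For reference, the dump above (describe nodes, dmesg, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet) is minikube's standard post-mortem log bundle. It can be regenerated against the same profile with the logs subcommand; the profile name is taken from this run, and any additional flags are omitted here:

	out/minikube-linux-arm64 -p multinode-814558 logs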
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-814558 -n multinode-814558
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-814558 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.86s)
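For context, PingHostFrom2Pods checks host reachability from the two busybox pods. A rough manual approximation against this profile, using the pod names from the node listing above (the exact probe the test issues may differ), is:

	kubectl --context multinode-814558 exec busybox-5bc68d56bd-hrkj8 -- nslookup host.minikube.internal
	kubectl --context multinode-814558 exec busybox-5bc68d56bd-ztvv9 -- nslookup host.minikube.internal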

                                                
                                    
x
+
TestRunningBinaryUpgrade (79.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.2000135262.exe start -p running-upgrade-208915 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.2000135262.exe start -p running-upgrade-208915 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m11.283102806s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-208915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-208915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.026593816s)

                                                
                                                
-- stdout --
	* [running-upgrade-208915] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-208915 in cluster running-upgrade-208915
	* Pulling base image ...
	* Updating the running docker "running-upgrade-208915" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 22:00:59.839408 1595785 out.go:296] Setting OutFile to fd 1 ...
	I1005 22:00:59.839691 1595785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 22:00:59.839720 1595785 out.go:309] Setting ErrFile to fd 2...
	I1005 22:00:59.839742 1595785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 22:00:59.840067 1595785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 22:00:59.840488 1595785 out.go:303] Setting JSON to false
	I1005 22:00:59.841891 1595785 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27807,"bootTime":1696515453,"procs":485,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 22:00:59.842007 1595785 start.go:138] virtualization:  
	I1005 22:00:59.844488 1595785 out.go:177] * [running-upgrade-208915] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 22:00:59.847166 1595785 notify.go:220] Checking for updates...
	I1005 22:00:59.848503 1595785 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 22:00:59.851087 1595785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 22:00:59.852775 1595785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 22:00:59.855312 1595785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 22:00:59.856960 1595785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 22:00:59.858511 1595785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 22:00:59.861120 1595785 config.go:182] Loaded profile config "running-upgrade-208915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 22:00:59.863628 1595785 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1005 22:00:59.865325 1595785 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 22:00:59.890758 1595785 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 22:00:59.891019 1595785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 22:00:59.995892 1595785 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-05 22:00:59.986210286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 22:00:59.996002 1595785 docker.go:294] overlay module found
	I1005 22:01:00.035403 1595785 out.go:177] * Using the docker driver based on existing profile
	I1005 22:01:00.059897 1595785 start.go:298] selected driver: docker
	I1005 22:01:00.059924 1595785 start.go:902] validating driver "docker" against &{Name:running-upgrade-208915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-208915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.20 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 22:01:00.060237 1595785 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 22:01:00.061094 1595785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 22:01:00.290496 1595785 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-05 22:01:00.255736579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 22:01:00.291240 1595785 cni.go:84] Creating CNI manager for ""
	I1005 22:01:00.291263 1595785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 22:01:00.291274 1595785 start_flags.go:321] config:
	{Name:running-upgrade-208915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-208915 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.20 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 22:01:00.295210 1595785 out.go:177] * Starting control plane node running-upgrade-208915 in cluster running-upgrade-208915
	I1005 22:01:00.298209 1595785 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 22:01:00.300641 1595785 out.go:177] * Pulling base image ...
	I1005 22:01:00.303420 1595785 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1005 22:01:00.303484 1595785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1005 22:01:00.354137 1595785 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1005 22:01:00.354169 1595785 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1005 22:01:00.372462 1595785 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1005 22:01:00.372638 1595785 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/running-upgrade-208915/config.json ...
	I1005 22:01:00.372951 1595785 cache.go:195] Successfully downloaded all kic artifacts
	I1005 22:01:00.373009 1595785 start.go:365] acquiring machines lock for running-upgrade-208915: {Name:mke440f1b5b955dd8b3fd4e096a8bb6487d61ca5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.373111 1595785 start.go:369] acquired machines lock for "running-upgrade-208915" in 51.873µs
	I1005 22:01:00.373131 1595785 start.go:96] Skipping create...Using existing machine configuration
	I1005 22:01:00.373136 1595785 fix.go:54] fixHost starting: 
	I1005 22:01:00.373491 1595785 cli_runner.go:164] Run: docker container inspect running-upgrade-208915 --format={{.State.Status}}
	I1005 22:01:00.373776 1595785 cache.go:107] acquiring lock: {Name:mk0fa157403c63492b15d5a0a2c52e3e839b3715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.373917 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1005 22:01:00.373933 1595785 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 162.1µs
	I1005 22:01:00.374001 1595785 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1005 22:01:00.374012 1595785 cache.go:107] acquiring lock: {Name:mkc964c082ca26bad021be16f7f923ee9f32a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374060 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1005 22:01:00.374071 1595785 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 60.808µs
	I1005 22:01:00.374079 1595785 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1005 22:01:00.374090 1595785 cache.go:107] acquiring lock: {Name:mk99ca885724680dd8693e2447f1b981a4c49dc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374126 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1005 22:01:00.374136 1595785 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 49.813µs
	I1005 22:01:00.374144 1595785 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1005 22:01:00.374156 1595785 cache.go:107] acquiring lock: {Name:mk28b497a6e64cfaf2b6ba1eb8f742cd400e4cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374186 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1005 22:01:00.374196 1595785 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 41.723µs
	I1005 22:01:00.374204 1595785 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1005 22:01:00.374214 1595785 cache.go:107] acquiring lock: {Name:mkb5763da8bac8f2e59684959ba9a85485218251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374246 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1005 22:01:00.374256 1595785 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 41.822µs
	I1005 22:01:00.374263 1595785 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1005 22:01:00.374272 1595785 cache.go:107] acquiring lock: {Name:mkfc6f61869687d6e82a0036c1cd3dc3327f61cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374306 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1005 22:01:00.374316 1595785 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 44.373µs
	I1005 22:01:00.374322 1595785 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1005 22:01:00.374332 1595785 cache.go:107] acquiring lock: {Name:mkbc7108f01f8e966d83756b2e5d6cef66841b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374362 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1005 22:01:00.374373 1595785 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 40.911µs
	I1005 22:01:00.374380 1595785 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1005 22:01:00.374389 1595785 cache.go:107] acquiring lock: {Name:mk1d6f6052102b5b4c1a02f29e9d7ee38e5131c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:01:00.374420 1595785 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1005 22:01:00.374428 1595785 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 39.573µs
	I1005 22:01:00.374435 1595785 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1005 22:01:00.374443 1595785 cache.go:87] Successfully saved all images to host disk.
	I1005 22:01:00.400033 1595785 fix.go:102] recreateIfNeeded on running-upgrade-208915: state=Running err=<nil>
	W1005 22:01:00.400077 1595785 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 22:01:00.402427 1595785 out.go:177] * Updating the running docker "running-upgrade-208915" container ...
	I1005 22:01:00.404836 1595785 machine.go:88] provisioning docker machine ...
	I1005 22:01:00.404905 1595785 ubuntu.go:169] provisioning hostname "running-upgrade-208915"
	I1005 22:01:00.404994 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:00.434612 1595785 main.go:141] libmachine: Using SSH client type: native
	I1005 22:01:00.435357 1595785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1005 22:01:00.435379 1595785 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-208915 && echo "running-upgrade-208915" | sudo tee /etc/hostname
	I1005 22:01:00.613819 1595785 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-208915
	
	I1005 22:01:00.613926 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:00.634573 1595785 main.go:141] libmachine: Using SSH client type: native
	I1005 22:01:00.634978 1595785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1005 22:01:00.635003 1595785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-208915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-208915/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-208915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 22:01:00.778652 1595785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 22:01:00.778680 1595785 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 22:01:00.778705 1595785 ubuntu.go:177] setting up certificates
	I1005 22:01:00.778718 1595785 provision.go:83] configureAuth start
	I1005 22:01:00.778786 1595785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-208915
	I1005 22:01:00.799785 1595785 provision.go:138] copyHostCerts
	I1005 22:01:00.799858 1595785 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 22:01:00.799872 1595785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 22:01:00.799953 1595785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 22:01:00.800065 1595785 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 22:01:00.800076 1595785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 22:01:00.800106 1595785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 22:01:00.800164 1595785 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 22:01:00.800174 1595785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 22:01:00.800199 1595785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 22:01:00.800249 1595785 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-208915 san=[192.168.59.20 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-208915]
	I1005 22:01:01.478842 1595785 provision.go:172] copyRemoteCerts
	I1005 22:01:01.478913 1595785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 22:01:01.478960 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:01.497741 1595785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/running-upgrade-208915/id_rsa Username:docker}
	I1005 22:01:01.599657 1595785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 22:01:01.626744 1595785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 22:01:01.656228 1595785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 22:01:01.685247 1595785 provision.go:86] duration metric: configureAuth took 906.513405ms
	I1005 22:01:01.685273 1595785 ubuntu.go:193] setting minikube options for container-runtime
	I1005 22:01:01.685509 1595785 config.go:182] Loaded profile config "running-upgrade-208915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 22:01:01.685627 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:01.708310 1595785 main.go:141] libmachine: Using SSH client type: native
	I1005 22:01:01.708809 1595785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34280 <nil> <nil>}
	I1005 22:01:01.708830 1595785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 22:01:02.305496 1595785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 22:01:02.305555 1595785 machine.go:91] provisioned docker machine in 1.90065498s
	I1005 22:01:02.305570 1595785 start.go:300] post-start starting for "running-upgrade-208915" (driver="docker")
	I1005 22:01:02.305581 1595785 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 22:01:02.305653 1595785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 22:01:02.305699 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:02.325621 1595785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/running-upgrade-208915/id_rsa Username:docker}
	I1005 22:01:02.427615 1595785 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 22:01:02.431907 1595785 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 22:01:02.431935 1595785 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 22:01:02.431946 1595785 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 22:01:02.431954 1595785 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1005 22:01:02.431964 1595785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 22:01:02.432026 1595785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 22:01:02.432109 1595785 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 22:01:02.432216 1595785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 22:01:02.441824 1595785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 22:01:02.472594 1595785 start.go:303] post-start completed in 167.002745ms
	I1005 22:01:02.472725 1595785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 22:01:02.472775 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:02.498620 1595785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/running-upgrade-208915/id_rsa Username:docker}
	I1005 22:01:02.599872 1595785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 22:01:02.605816 1595785 fix.go:56] fixHost completed within 2.232669744s
	I1005 22:01:02.605842 1595785 start.go:83] releasing machines lock for "running-upgrade-208915", held for 2.232717999s
	I1005 22:01:02.605924 1595785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-208915
	I1005 22:01:02.625079 1595785 ssh_runner.go:195] Run: cat /version.json
	I1005 22:01:02.625133 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:02.625213 1595785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 22:01:02.625379 1595785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-208915
	I1005 22:01:02.646722 1595785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/running-upgrade-208915/id_rsa Username:docker}
	I1005 22:01:02.653485 1595785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34280 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/running-upgrade-208915/id_rsa Username:docker}
	W1005 22:01:02.745840 1595785 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1005 22:01:02.745929 1595785 ssh_runner.go:195] Run: systemctl --version
	I1005 22:01:02.827345 1595785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 22:01:02.966860 1595785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 22:01:02.976134 1595785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 22:01:03.005892 1595785 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 22:01:03.006131 1595785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 22:01:03.050528 1595785 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1005 22:01:03.050553 1595785 start.go:469] detecting cgroup driver to use...
	I1005 22:01:03.050616 1595785 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 22:01:03.050695 1595785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 22:01:03.087463 1595785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 22:01:03.103651 1595785 docker.go:197] disabling cri-docker service (if available) ...
	I1005 22:01:03.103756 1595785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 22:01:03.118122 1595785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 22:01:03.133033 1595785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1005 22:01:03.148599 1595785 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1005 22:01:03.148670 1595785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 22:01:03.394686 1595785 docker.go:213] disabling docker service ...
	I1005 22:01:03.394755 1595785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 22:01:03.411506 1595785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 22:01:03.425062 1595785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 22:01:03.589734 1595785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 22:01:03.740410 1595785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 22:01:03.754012 1595785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 22:01:03.779592 1595785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 22:01:03.779670 1595785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 22:01:03.794661 1595785 out.go:177] 
	W1005 22:01:03.797060 1595785 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1005 22:01:03.797088 1595785 out.go:239] * 
	W1005 22:01:03.798059 1595785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 22:01:03.800697 1595785 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-208915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
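Note on the failure above: minikube HEAD rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image started by the v1.17.0 binary evidently does not contain that drop-in (stderr: "sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory"), so the sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal workaround sketch, assuming shell access inside the container and that CRI-O on that image reads /etc/crio/crio.conf.d (illustrative only, not minikube's actual fix):

	# Create the drop-in if the old base image lacks it, then apply the same
	# pause_image rewrite that failed above and restart CRI-O.
	CONF=/etc/crio/crio.conf.d/02-crio.conf   # path taken from the error above
	sudo mkdir -p "$(dirname "$CONF")"
	if sudo grep -qs 'pause_image = ' "$CONF"; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	fi
	sudo systemctl restart crio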
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-05 22:01:03.837351278 +0000 UTC m=+2775.093286382
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-208915
helpers_test.go:235: (dbg) docker inspect running-upgrade-208915:

-- stdout --
	[
	    {
	        "Id": "37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b",
	        "Created": "2023-10-05T22:00:08.201765495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1591825,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T22:00:08.787404879Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b/hostname",
	        "HostsPath": "/var/lib/docker/containers/37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b/hosts",
	        "LogPath": "/var/lib/docker/containers/37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b/37916a4cb8bcab45f11ecbb7680fb33e25312c424eb67ef305534d03ce6c469b-json.log",
	        "Name": "/running-upgrade-208915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-208915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-208915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/58b12051cdc0730046040a51293b715a76f3e97139c7d50072df1e0d7b286381-init/diff:/var/lib/docker/overlay2/70a2eef69f5924b2e4e9e9526112a2b952b2b35be17f77adf6aa0b9a3dd7e22a/diff:/var/lib/docker/overlay2/af9d36c6a4bd0a3924783bac935ec107d3095f2b519310846ade5251518c98ef/diff:/var/lib/docker/overlay2/979cdc1cf91bceced0f85e99b898db1dae6a9b6762b30b95b10addaf6423c536/diff:/var/lib/docker/overlay2/d070351e27713aac4fab08c1c44eabca70ff7f505a664a27fcf91d5aea09f7ed/diff:/var/lib/docker/overlay2/5cd4d36b70785544241bb24415e3cbd7dd52f8e7c338c53bd8d635df4e3da9cc/diff:/var/lib/docker/overlay2/dbeedb846e538238987e33e03bdb05a41f3b9a0170c00cb6b6561729a08d135b/diff:/var/lib/docker/overlay2/6fa1369a8cd12d1c050e638c268f39ed0448c71148d3f3b218a3c01102946a56/diff:/var/lib/docker/overlay2/a06551f2c598928cc271781048fca53c19d3c4cb664fd21c45025e2125060d91/diff:/var/lib/docker/overlay2/0280acd03f69641fccd4ea3aed3751d626843af80c1a57fb3494611fa7bf0b1f/diff:/var/lib/docker/overlay2/6c6c95eaee379ce8a28809a4868a87066777afc43021c3641c81ae5400777214/diff:/var/lib/docker/overlay2/d9dcc7bbe3f607dbb59c83681732ccc22dc0182c42eb1a9608868c49c440e40c/diff:/var/lib/docker/overlay2/743df4f08051865b005f435b7fc4fb8146d74a2c5ea247332bc6652b9520d402/diff:/var/lib/docker/overlay2/e0d772ba6f56854125b0a4210e4218b61d186c6d48a8e487010b6183036ede43/diff:/var/lib/docker/overlay2/76e3339af70d89020f271a12ac8b5e92a633b28d409ee03658d5af7ae7666e3a/diff:/var/lib/docker/overlay2/6727005e6b558b2c3e69204bf5a0afa200365762e6fb8e2d02631331229f4e35/diff:/var/lib/docker/overlay2/e5f7756a5067fb5587a4a754af85a6855e1117348adf0373404b45e1d290797b/diff:/var/lib/docker/overlay2/d4782b310e72c39eb6a286f94e02d78587fe512772539a3834eceb1ac60dcf6e/diff:/var/lib/docker/overlay2/9e3a170376b712ab58ba869eb0dfbd3033c4b605601569ddd004bde130c5f2e7/diff:/var/lib/docker/overlay2/833421aad7959eccdbe7301ab45d4fdea6dd74718a3a353bc462712c98729c1b/diff:/var/lib/docker/overlay2/cc7131cc56da7a1c8b12decd318dc462b887129b9258075bb3166352fb45c7c8/diff:/var/lib/docker/overlay2/267be2bb5a0feae2b3e06782b96d9c1316acbd6688b3d1b70b68ffeb5f6e9491/diff:/var/lib/docker/overlay2/b10c2345cdd924c1ea1e7210a6081cf9b11b74fbc100118c52a2e9c9a297b78f/diff:/var/lib/docker/overlay2/c4fd8708e543148b3b00223dedbe48c51e759d3f18f7782eb728b8e076bc51fb/diff:/var/lib/docker/overlay2/88babb18f95f26f6cb8b5a9865dbe9d27d2e8c122abbba737d8b43f61d9ed598/diff:/var/lib/docker/overlay2/a179201e885c6c10822bb20098f0929895d2ca26d204e868dcddb487958c3a53/diff:/var/lib/docker/overlay2/dd9579dd0776a98893059e040e23b363f47f1d4d0e4af63ad0844d849fc2e252/diff:/var/lib/docker/overlay2/980075aae651fa08cda551cc5f311485ad3f16222ac4a69744f4ffe535599a0d/diff:/var/lib/docker/overlay2/33bb250c1369683bbbc9551ee7de3c21495849548e65ece67451f9387c15cb63/diff:/var/lib/docker/overlay2/63818b72506020223fbb1e0cc69e043fec29fa4c86efc6fd9bec566aec176e69/diff:/var/lib/docker/overlay2/9bc34992e925b1f18a50cb2b274007f1574b0fa9ff70618e1e2a34c18d069a9f/diff:/var/lib/docker/overlay2/3fbb11e873f71a92aeba423f4a77ac9cde9b3c5afbdcc90a161f308f3d6e4277/diff:/var/lib/docker/overlay2/6a6d5048aedae7ff0f26ac31da1ec54e5be390bc91ac62f4aa6b86d24eae50b9/diff:/var/lib/docker/overlay2/f8881ed1f2fe571d2cab86c32e750d36d8c4040b31be6d25fc86f4adbe003247/diff:/var/lib/docker/overlay2/1333b3a368087330c0ead2fc9ad4431509b0e9566c2f46d92314db4a760541a0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58b12051cdc0730046040a51293b715a76f3e97139c7d50072df1e0d7b286381/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58b12051cdc0730046040a51293b715a76f3e97139c7d50072df1e0d7b286381/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58b12051cdc0730046040a51293b715a76f3e97139c7d50072df1e0d7b286381/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-208915",
	                "Source": "/var/lib/docker/volumes/running-upgrade-208915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-208915",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-208915",
	                "name.minikube.sigs.k8s.io": "running-upgrade-208915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4fed6ad9b6c0ba80c85c0a254a4ad920435d795bc0c8d9ff8e9c825c4f3f27",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34280"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34279"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1d4fed6ad9b6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-208915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.20"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37916a4cb8bc",
	                        "running-upgrade-208915"
	                    ],
	                    "NetworkID": "4e284004c62c98c505440f1f1584fa8279763eb2b06198a8d1df1a15d9a9e615",
	                    "EndpointID": "e09a541f38e20a820af0df20d08b843a4584089f17b30b27b6dedf074275c352",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:14",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-208915 -n running-upgrade-208915
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-208915 -n running-upgrade-208915: exit status 4 (657.250889ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1005 22:01:04.391448 1596363 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-208915" does not appear in /home/jenkins/minikube-integration/17363-1448442/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-208915" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
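Note: the exit status 4 here is a follow-on symptom rather than a separate failure: the aborted upgrade never wrote the profile into /home/jenkins/minikube-integration/17363-1448442/kubeconfig, so status cannot extract an endpoint. The repair the warning itself suggests, shown purely as a usage illustration (the harness does not run it before deleting the profile):

	minikube update-context -p running-upgrade-208915
	kubectl config current-context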
helpers_test.go:175: Cleaning up "running-upgrade-208915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-208915
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-208915: (2.634224567s)
--- FAIL: TestRunningBinaryUpgrade (79.80s)

x
+
TestMissingContainerUpgrade (130.25s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.4221481255.exe start -p missing-upgrade-141603 --memory=2200 --driver=docker  --container-runtime=crio
E1005 21:57:54.598421 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:58:37.312142 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.4221481255.exe start -p missing-upgrade-141603 --memory=2200 --driver=docker  --container-runtime=crio: (1m30.170494482s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-141603
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-141603: (1.695748982s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-141603
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-141603 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-141603 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (34.446399991s)
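For reference, the sequence this test exercises can be replayed by hand with the same commands logged above (the v1.17.0 binary path is the temporary file the harness extracted); the captured output of the failing step follows:

	/tmp/minikube-v1.17.0.4221481255.exe start -p missing-upgrade-141603 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-141603
	docker rm missing-upgrade-141603
	out/minikube-linux-arm64 start -p missing-upgrade-141603 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio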

-- stdout --
	* [missing-upgrade-141603] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-141603 in cluster missing-upgrade-141603
	* Pulling base image ...
	* docker "missing-upgrade-141603" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1005 21:59:10.207786 1587480 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:59:10.207992 1587480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:59:10.208000 1587480 out.go:309] Setting ErrFile to fd 2...
	I1005 21:59:10.208005 1587480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:59:10.208286 1587480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:59:10.209604 1587480 out.go:303] Setting JSON to false
	I1005 21:59:10.211617 1587480 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27698,"bootTime":1696515453,"procs":459,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:59:10.211711 1587480 start.go:138] virtualization:  
	I1005 21:59:10.216205 1587480 out.go:177] * [missing-upgrade-141603] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:59:10.221133 1587480 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:59:10.223081 1587480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:59:10.224923 1587480 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:59:10.223240 1587480 notify.go:220] Checking for updates...
	I1005 21:59:10.222222 1587480 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:59:10.227051 1587480 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:59:10.229471 1587480 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:59:10.231246 1587480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:59:10.234183 1587480 config.go:182] Loaded profile config "missing-upgrade-141603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 21:59:10.239883 1587480 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1005 21:59:10.242435 1587480 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:59:10.278782 1587480 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:59:10.279889 1587480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:59:10.305717 1587480 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1005 21:59:10.384444 1587480 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:59:10.371811856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:59:10.384566 1587480 docker.go:294] overlay module found
	I1005 21:59:10.387372 1587480 out.go:177] * Using the docker driver based on existing profile
	I1005 21:59:10.389097 1587480 start.go:298] selected driver: docker
	I1005 21:59:10.389115 1587480 start.go:902] validating driver "docker" against &{Name:missing-upgrade-141603 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-141603 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.36 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 21:59:10.389230 1587480 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:59:10.389991 1587480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:59:10.472282 1587480 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:59:10.462257521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:59:10.473949 1587480 cni.go:84] Creating CNI manager for ""
	I1005 21:59:10.473985 1587480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
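
The line above records the CNI decision: a KIC driver ("docker") paired with a non-Docker runtime ("crio") gets kindnet by default. A minimal sketch of that rule, assuming a hypothetical helper rather than minikube's actual cni package:

    // chooseCNI sketches the recommendation logged above. Hypothetical
    // helper; the real logic lives in minikube's pkg/minikube/cni.
    func chooseCNI(driver, runtime string) string {
        // KIC drivers run the node as a container, so a CNI that works
        // inside the kicbase image is needed; kindnet is the stock pick.
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // nothing extra recommended
    }
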
	I1005 21:59:10.474005 1587480 start_flags.go:321] config:
	{Name:missing-upgrade-141603 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-141603 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.36 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 21:59:10.476049 1587480 out.go:177] * Starting control plane node missing-upgrade-141603 in cluster missing-upgrade-141603
	I1005 21:59:10.477593 1587480 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:59:10.479392 1587480 out.go:177] * Pulling base image ...
	I1005 21:59:10.481186 1587480 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1005 21:59:10.481279 1587480 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1005 21:59:10.504030 1587480 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1005 21:59:10.504210 1587480 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1005 21:59:10.504702 1587480 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1005 21:59:10.581711 1587480 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1005 21:59:10.581991 1587480 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/missing-upgrade-141603/config.json ...
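
The 404 for the v1.20.2 cri-o preload tarball explains the per-image caching that follows: with no preloaded volume published for this version/runtime/arch combination, minikube caches each required image to its own tarball under .minikube/cache/images. A self-contained sketch of the probe-then-fallback step; only the URL and image list are taken from this log, the rest is illustrative:

    package main

    import (
        "fmt"
        "net/http"
    )

    // The images cached one by one in the lines that follow (list from this log).
    var images = []string{
        "registry.k8s.io/kube-apiserver:v1.20.2",
        "registry.k8s.io/kube-controller-manager:v1.20.2",
        "registry.k8s.io/kube-scheduler:v1.20.2",
        "registry.k8s.io/kube-proxy:v1.20.2",
        "registry.k8s.io/etcd:3.4.13-0",
        "registry.k8s.io/coredns:1.7.0",
        "registry.k8s.io/pause:3.2",
        "gcr.io/k8s-minikube/storage-provisioner:v5",
    }

    // preloadExists probes the tarball URL; a 404, as above, sends the
    // caller down the per-image path.
    func preloadExists(url string) bool {
        resp, err := http.Head(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4"
        if !preloadExists(url) {
            fmt.Printf("no preload; caching %d images individually\n", len(images))
        }
    }
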
	I1005 21:59:10.582829 1587480 cache.go:107] acquiring lock: {Name:mkb5763da8bac8f2e59684959ba9a85485218251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.582875 1587480 cache.go:107] acquiring lock: {Name:mkfc6f61869687d6e82a0036c1cd3dc3327f61cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.583004 1587480 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1005 21:59:10.583021 1587480 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1005 21:59:10.584586 1587480 cache.go:107] acquiring lock: {Name:mkc964c082ca26bad021be16f7f923ee9f32a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.584694 1587480 cache.go:107] acquiring lock: {Name:mkbc7108f01f8e966d83756b2e5d6cef66841b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.584725 1587480 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1005 21:59:10.584854 1587480 cache.go:107] acquiring lock: {Name:mk99ca885724680dd8693e2447f1b981a4c49dc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.584883 1587480 cache.go:107] acquiring lock: {Name:mk28b497a6e64cfaf2b6ba1eb8f742cd400e4cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.584982 1587480 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1005 21:59:10.584988 1587480 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1005 21:59:10.582838 1587480 cache.go:107] acquiring lock: {Name:mk0fa157403c63492b15d5a0a2c52e3e839b3715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.585106 1587480 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1005 21:59:10.585154 1587480 cache.go:107] acquiring lock: {Name:mk1d6f6052102b5b4c1a02f29e9d7ee38e5131c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:10.585116 1587480 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.315434ms
	I1005 21:59:10.585237 1587480 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1005 21:59:10.585264 1587480 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1005 21:59:10.585419 1587480 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1005 21:59:10.586745 1587480 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1005 21:59:10.587344 1587480 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1005 21:59:10.587589 1587480 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1005 21:59:10.587827 1587480 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1005 21:59:10.587919 1587480 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1005 21:59:10.588250 1587480 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1005 21:59:10.588540 1587480 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1005 21:59:11.028101 1587480 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1005 21:59:11.028233 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1005 21:59:11.041610 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1005 21:59:11.079335 1587480 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1005 21:59:11.079897 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1005 21:59:11.092570 1587480 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1005 21:59:11.092659 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1005 21:59:11.105790 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1005 21:59:11.109257 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1005 21:59:11.117450 1587480 cache.go:162] opening:  /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
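
The "arch mismatch: want arm64 got amd64. fixing" warnings record the remedy for a wrong-architecture image: re-resolve the tag from the registry pinned to the wanted platform. minikube's image code is built on go-containerregistry, so a sketch along those lines (library calls as I recall them; the surrounding wiring is assumed):

    import (
        "github.com/google/go-containerregistry/pkg/name"
        v1 "github.com/google/go-containerregistry/pkg/v1"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    // fetchForArch re-resolves a tag pinned to the wanted platform, the
    // remedy applied when the local copy is amd64 on this arm64 host.
    func fetchForArch(tag, arch string) (v1.Image, error) {
        ref, err := name.ParseReference(tag)
        if err != nil {
            return nil, err
        }
        return remote.Image(ref, remote.WithPlatform(v1.Platform{OS: "linux", Architecture: arch}))
    }

    // e.g. fetchForArch("registry.k8s.io/etcd:3.4.13-0", "arm64")
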
	I1005 21:59:11.173183 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1005 21:59:11.173210 1587480 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 590.335054ms
	I1005 21:59:11.173223 1587480 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.75 MiB / 287.99 MiB [>] 6.16% ? p/s ?
	I1005 21:59:11.615188 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1005 21:59:11.615220 1587480 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.030068339s
	I1005 21:59:11.615233 1587480 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1005 21:59:11.771250 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1005 21:59:11.771278 1587480 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.18639371s
	I1005 21:59:11.771292 1587480 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  28.66 MiB / 287.99 MiB  9.95% 40.71 MiB
	I1005 21:59:12.426083 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1005 21:59:12.426158 1587480 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.841303848s
	I1005 21:59:12.426216 1587480 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1005 21:59:12.443546 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1005 21:59:12.443576 1587480 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.859015425s
	I1005 21:59:12.443590 1587480 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  101.69 MiB / 287.99 MiB  35.31% 43.21 MiB
	I1005 21:59:13.748248 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1005 21:59:13.748282 1587480 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 3.165460389s
	I1005 21:59:13.748296 1587480 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  277.01 MiB / 287.99 MiB  96.19% 47.93 MiB
	I1005 21:59:16.529247 1587480 cache.go:157] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1005 21:59:16.529272 1587480 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.94458449s
	I1005 21:59:16.529286 1587480 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1005 21:59:16.529302 1587480 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 41.37 MiB
	I1005 21:59:18.173785 1587480 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1005 21:59:18.173795 1587480 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1005 21:59:18.338407 1587480 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1005 21:59:18.338443 1587480 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:59:18.338918 1587480 start.go:365] acquiring machines lock for missing-upgrade-141603: {Name:mk697600e8d8cc60db2b15482c8209622c8c54d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:59:18.339000 1587480 start.go:369] acquired machines lock for "missing-upgrade-141603" in 50.281µs
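
Every {Name:... Delay:500ms Timeout:10m0s} struct above, for the image-cache locks and the machines lock alike, describes the same acquisition pattern: poll a try-lock every Delay until Timeout. A sketch under those parameters; tryLock stands in for the file-based mutex minikube actually uses:

    import (
        "fmt"
        "time"
    )

    // acquire polls tryLock every delay until timeout, matching the
    // Delay:500ms / Timeout:10m0s parameters logged above.
    func acquire(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock() {
            if time.Now().After(deadline) {
                return fmt.Errorf("lock not acquired within %s", timeout)
            }
            time.Sleep(delay)
        }
        return nil
    }
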
	I1005 21:59:18.339026 1587480 start.go:96] Skipping create...Using existing machine configuration
	I1005 21:59:18.339033 1587480 fix.go:54] fixHost starting: 
	I1005 21:59:18.339321 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:18.359595 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:18.359655 1587480 fix.go:102] recreateIfNeeded on missing-upgrade-141603: state= err=unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:18.359675 1587480 fix.go:107] machineExists: false. err=machine does not exist
	I1005 21:59:18.361548 1587480 out.go:177] * docker "missing-upgrade-141603" container is missing, will recreate.
	I1005 21:59:18.363677 1587480 delete.go:124] DEMOLISHING missing-upgrade-141603 ...
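
At this point fixHost has concluded that the profile exists but its container does not, so the path is: best-effort demolition of any leftovers, then a fresh create. A sketch with the three steps injected as callbacks, stand-ins for minikube's fix.go and delete.go helpers:

    // fixHost sketches the recreate path taken above: inspect failed with
    // "No such container", so tear down leftovers and create from scratch.
    func fixHost(inspect func() (string, error), demolish, create func() error) error {
        if _, err := inspect(); err == nil {
            return nil // machine exists; reuse its configuration
        }
        _ = demolish() // "probably ok" if this fails: nothing left to stop
        return create()
    }
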
	I1005 21:59:18.363791 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:18.380766 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	W1005 21:59:18.380828 1587480 stop.go:75] unable to get state: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:18.380848 1587480 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:18.381295 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:18.399660 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:18.399726 1587480 delete.go:82] Unable to get host status for missing-upgrade-141603, assuming it has already been deleted: state: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:18.399792 1587480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-141603
	W1005 21:59:18.416079 1587480 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-141603 returned with exit code 1
	I1005 21:59:18.416113 1587480 kic.go:367] could not find the container missing-upgrade-141603 to remove it. will try anyways
	I1005 21:59:18.416167 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:18.432353 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	W1005 21:59:18.432423 1587480 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:18.432489 1587480 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-141603 /bin/bash -c "sudo init 0"
	W1005 21:59:18.451759 1587480 cli_runner.go:211] docker exec --privileged -t missing-upgrade-141603 /bin/bash -c "sudo init 0" returned with exit code 1
	I1005 21:59:18.451799 1587480 oci.go:647] error shutdown missing-upgrade-141603: docker exec --privileged -t missing-upgrade-141603 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:19.452019 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:19.468858 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:19.468936 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:19.468954 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:19.468981 1587480 retry.go:31] will retry after 374.960436ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:19.844624 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:19.862214 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:19.862288 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:19.862302 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:19.862338 1587480 retry.go:31] will retry after 848.379443ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:20.711561 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:20.729300 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:20.729391 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:20.729407 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:20.729432 1587480 retry.go:31] will retry after 798.561134ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:21.528237 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:21.546470 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:21.546547 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:21.546561 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:21.546593 1587480 retry.go:31] will retry after 945.879753ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:22.492720 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:22.509678 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:22.509741 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:22.509750 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:22.509780 1587480 retry.go:31] will retry after 1.404447856s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:23.914382 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:23.930557 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:23.930625 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:23.930640 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:23.930666 1587480 retry.go:31] will retry after 1.997706804s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:25.928608 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:25.946721 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:25.946787 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:25.946800 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:25.946827 1587480 retry.go:31] will retry after 6.778671622s: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:32.725749 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:32.742445 1587480 cli_runner.go:211] docker container inspect missing-upgrade-141603 --format={{.State.Status}} returned with exit code 1
	I1005 21:59:32.742512 1587480 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	I1005 21:59:32.742527 1587480 oci.go:661] temporary error: container missing-upgrade-141603 status is  but expect it to be exited
	I1005 21:59:32.742560 1587480 oci.go:88] couldn't shut down missing-upgrade-141603 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-141603": docker container inspect missing-upgrade-141603 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-141603
	 
	I1005 21:59:32.742663 1587480 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-141603
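
The waits above (roughly 375ms, 848ms, 799ms, 946ms, 1.4s, 2.0s, 6.8s) trace a jittered, roughly exponential backoff; once the budget is spent, verification is abandoned ("might be okay") and the code falls back to the docker rm -f -v on the line above. A sketch of that loop; the constants and structure are assumptions, not the verbatim oci.go:

    import (
        "errors"
        "math/rand"
        "time"
    )

    // verifyShutdown polls the container state with jittered,
    // roughly-doubling waits and gives up after maxWait, letting the
    // caller force-remove the container instead.
    func verifyShutdown(status func() (string, error), maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        wait := 400 * time.Millisecond
        for {
            if s, err := status(); err == nil && s == "exited" {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("couldn't verify container is exited (might be okay)")
            }
            time.Sleep(wait/2 + time.Duration(rand.Int63n(int64(wait))))
            wait *= 2
        }
    }
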
	I1005 21:59:32.759440 1587480 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-141603
	W1005 21:59:32.779159 1587480 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-141603 returned with exit code 1
	I1005 21:59:32.780050 1587480 cli_runner.go:164] Run: docker network inspect missing-upgrade-141603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:59:32.797674 1587480 cli_runner.go:164] Run: docker network rm missing-upgrade-141603
	I1005 21:59:32.940083 1587480 fix.go:114] Sleeping 1 second for extra luck!
	I1005 21:59:33.940245 1587480 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:59:33.944872 1587480 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1005 21:59:33.945044 1587480 start.go:159] libmachine.API.Create for "missing-upgrade-141603" (driver="docker")
	I1005 21:59:33.945071 1587480 client.go:168] LocalClient.Create starting
	I1005 21:59:33.945589 1587480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:59:33.945638 1587480 main.go:141] libmachine: Decoding PEM data...
	I1005 21:59:33.945657 1587480 main.go:141] libmachine: Parsing certificate...
	I1005 21:59:33.945734 1587480 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:59:33.945753 1587480 main.go:141] libmachine: Decoding PEM data...
	I1005 21:59:33.945764 1587480 main.go:141] libmachine: Parsing certificate...
	I1005 21:59:33.946031 1587480 cli_runner.go:164] Run: docker network inspect missing-upgrade-141603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:59:33.964032 1587480 cli_runner.go:211] docker network inspect missing-upgrade-141603 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:59:33.964115 1587480 network_create.go:281] running [docker network inspect missing-upgrade-141603] to gather additional debugging logs...
	I1005 21:59:33.964135 1587480 cli_runner.go:164] Run: docker network inspect missing-upgrade-141603
	W1005 21:59:33.982174 1587480 cli_runner.go:211] docker network inspect missing-upgrade-141603 returned with exit code 1
	I1005 21:59:33.982208 1587480 network_create.go:284] error running [docker network inspect missing-upgrade-141603]: docker network inspect missing-upgrade-141603: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-141603 not found
	I1005 21:59:33.982221 1587480 network_create.go:286] output of [docker network inspect missing-upgrade-141603]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-141603 not found
	
	** /stderr **
	I1005 21:59:33.982334 1587480 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:59:34.007258 1587480 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d16b9e9a692c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:9e:45:13} reservation:<nil>}
	I1005 21:59:34.007609 1587480 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f25a4bc44290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:89:8c:51:03} reservation:<nil>}
	I1005 21:59:34.008097 1587480 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e2970}
	I1005 21:59:34.008119 1587480 network_create.go:124] attempt to create docker network missing-upgrade-141603 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1005 21:59:34.008187 1587480 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-141603 missing-upgrade-141603
	I1005 21:59:34.087972 1587480 network_create.go:108] docker network missing-upgrade-141603 192.168.67.0/24 created
	I1005 21:59:34.088005 1587480 kic.go:117] calculated static IP "192.168.67.2" for the "missing-upgrade-141603" container
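
The subnet scan above steps the third octet by 9 (49, 58, 67, ...) and takes the first /24 with no existing bridge; the node then gets the first host address after the gateway, hence the calculated static IP 192.168.67.2. A sketch of both rules (the step size is inferred from this log, not a documented constant):

    import "fmt"

    // freeSubnet returns the first candidate /24 with no existing bridge,
    // plus the derived gateway (.1) and node IP (.2).
    func freeSubnet(taken func(cidr string) bool) (subnet, gateway, nodeIP string) {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken(cidr) {
                continue // e.g. 192.168.49.0/24 and 192.168.58.0/24 above
            }
            return cidr, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet)
        }
        return "", "", ""
    }
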
	I1005 21:59:34.088094 1587480 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:59:34.105920 1587480 cli_runner.go:164] Run: docker volume create missing-upgrade-141603 --label name.minikube.sigs.k8s.io=missing-upgrade-141603 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:59:34.130126 1587480 oci.go:103] Successfully created a docker volume missing-upgrade-141603
	I1005 21:59:34.130217 1587480 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-141603-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-141603 --entrypoint /usr/bin/test -v missing-upgrade-141603:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1005 21:59:35.798513 1587480 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-141603-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-141603 --entrypoint /usr/bin/test -v missing-upgrade-141603:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.668250185s)
	I1005 21:59:35.798545 1587480 oci.go:107] Successfully prepared a docker volume missing-upgrade-141603
	I1005 21:59:35.798558 1587480 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1005 21:59:35.798706 1587480 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:59:35.798819 1587480 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:59:35.873169 1587480 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-141603 --name missing-upgrade-141603 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-141603 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-141603 --network missing-upgrade-141603 --ip 192.168.67.2 --volume missing-upgrade-141603:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1005 21:59:36.252156 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Running}}
	I1005 21:59:36.275900 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	I1005 21:59:36.303933 1587480 cli_runner.go:164] Run: docker exec missing-upgrade-141603 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:59:36.391877 1587480 oci.go:144] the created container "missing-upgrade-141603" has a running status.
	I1005 21:59:36.391907 1587480 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa...
	I1005 21:59:36.578122 1587480 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:59:36.599509 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	I1005 21:59:36.625621 1587480 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:59:36.625644 1587480 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-141603 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:59:36.748695 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	I1005 21:59:36.771432 1587480 machine.go:88] provisioning docker machine ...
	I1005 21:59:36.771462 1587480 ubuntu.go:169] provisioning hostname "missing-upgrade-141603"
	I1005 21:59:36.771850 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:36.800203 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:36.800636 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:36.800656 1587480 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-141603 && echo "missing-upgrade-141603" | sudo tee /etc/hostname
	I1005 21:59:36.801272 1587480 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1005 21:59:39.953875 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-141603
	
	I1005 21:59:39.953971 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:39.972976 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:39.973416 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:39.973441 1587480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-141603' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-141603/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-141603' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:59:40.135253 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:59:40.135278 1587480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:59:40.135321 1587480 ubuntu.go:177] setting up certificates
	I1005 21:59:40.135331 1587480 provision.go:83] configureAuth start
	I1005 21:59:40.135422 1587480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141603
	I1005 21:59:40.154058 1587480 provision.go:138] copyHostCerts
	I1005 21:59:40.154130 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:59:40.154144 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:59:40.154223 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:59:40.154322 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:59:40.154332 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:59:40.154361 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:59:40.154424 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:59:40.154435 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:59:40.154461 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
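
copyHostCerts above follows a remove-then-copy discipline (found ... removing ... cp) so stale certificate material never survives re-provisioning. A stdlib sketch of that idiom; the path handling and file mode are illustrative:

    import "os"

    // replaceFile removes any stale destination before copying, so an old
    // cert can never shadow a refreshed one.
    func replaceFile(src, dst string) error {
        if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
            return err
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        return os.WriteFile(dst, data, 0o600)
    }
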
	I1005 21:59:40.154511 1587480 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-141603 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-141603]
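
The server certificate is minted with SANs covering the container IP, loopback, and the minikube hostnames, and the config's CertExpiration of 26280h works out to three years. A crypto/x509 template mirroring that SAN list; key generation and CA signing are elided, and the field choices beyond the SANs are assumptions:

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertTemplate mirrors the san=[...] list logged above.
    func serverCertTemplate() x509.Certificate {
        return x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-141603"}},
            IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "missing-upgrade-141603"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
    }
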
	I1005 21:59:40.700030 1587480 provision.go:172] copyRemoteCerts
	I1005 21:59:40.700099 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:59:40.700140 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:40.719986 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:40.818686 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:59:40.843297 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 21:59:40.866105 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1005 21:59:40.888662 1587480 provision.go:86] duration metric: configureAuth took 753.316322ms
	I1005 21:59:40.888729 1587480 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:59:40.888931 1587480 config.go:182] Loaded profile config "missing-upgrade-141603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 21:59:40.889041 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:40.908150 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:40.908563 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:40.908584 1587480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:59:41.340714 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:59:41.340745 1587480 machine.go:91] provisioned docker machine in 4.56929318s
	I1005 21:59:41.340755 1587480 client.go:171] LocalClient.Create took 7.395677406s
	I1005 21:59:41.340765 1587480 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-141603" took 7.395723248s
	I1005 21:59:41.340772 1587480 start.go:300] post-start starting for "missing-upgrade-141603" (driver="docker")
	I1005 21:59:41.340782 1587480 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:59:41.340853 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:59:41.340898 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:41.360202 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:41.458470 1587480 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:59:41.462963 1587480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:59:41.462991 1587480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:59:41.463002 1587480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:59:41.463010 1587480 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1005 21:59:41.463020 1587480 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:59:41.463087 1587480 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:59:41.463170 1587480 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:59:41.463278 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:59:41.472109 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:59:41.495404 1587480 start.go:303] post-start completed in 154.615758ms
	I1005 21:59:41.495787 1587480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141603
	I1005 21:59:41.514425 1587480 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/missing-upgrade-141603/config.json ...
	I1005 21:59:41.514695 1587480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:59:41.514747 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:41.532939 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:41.637962 1587480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:59:41.645752 1587480 start.go:128] duration metric: createHost completed in 7.705464705s
	I1005 21:59:41.645840 1587480 cli_runner.go:164] Run: docker container inspect missing-upgrade-141603 --format={{.State.Status}}
	W1005 21:59:41.677955 1587480 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 21:59:41.677986 1587480 machine.go:88] provisioning docker machine ...
	I1005 21:59:41.678004 1587480 ubuntu.go:169] provisioning hostname "missing-upgrade-141603"
	I1005 21:59:41.678070 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:41.700434 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:41.700842 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:41.700855 1587480 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-141603 && echo "missing-upgrade-141603" | sudo tee /etc/hostname
	I1005 21:59:41.862249 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-141603
	
	I1005 21:59:41.862398 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:41.887497 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:41.887900 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:41.887920 1587480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-141603' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-141603/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-141603' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:59:42.037507 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:59:42.037532 1587480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:59:42.037554 1587480 ubuntu.go:177] setting up certificates
	I1005 21:59:42.037563 1587480 provision.go:83] configureAuth start
	I1005 21:59:42.037670 1587480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141603
	I1005 21:59:42.086254 1587480 provision.go:138] copyHostCerts
	I1005 21:59:42.086356 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:59:42.086374 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:59:42.086478 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:59:42.086606 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:59:42.086617 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:59:42.086649 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:59:42.086719 1587480 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:59:42.086729 1587480 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:59:42.086764 1587480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:59:42.086878 1587480 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-141603 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-141603]
	I1005 21:59:42.795911 1587480 provision.go:172] copyRemoteCerts
	I1005 21:59:42.795975 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:59:42.796019 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:42.818794 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:42.918688 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:59:42.942274 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 21:59:42.965506 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:59:42.989725 1587480 provision.go:86] duration metric: configureAuth took 952.147369ms
	I1005 21:59:42.989793 1587480 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:59:42.990015 1587480 config.go:182] Loaded profile config "missing-upgrade-141603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 21:59:42.990135 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:43.015254 1587480 main.go:141] libmachine: Using SSH client type: native
	I1005 21:59:43.015671 1587480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34276 <nil> <nil>}
	I1005 21:59:43.015688 1587480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:59:43.371230 1587480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:59:43.371252 1587480 machine.go:91] provisioned docker machine in 1.693257884s
	I1005 21:59:43.371263 1587480 start.go:300] post-start starting for "missing-upgrade-141603" (driver="docker")
	I1005 21:59:43.371282 1587480 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:59:43.371360 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:59:43.371403 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:43.392818 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:43.494717 1587480 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:59:43.498729 1587480 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:59:43.498756 1587480 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:59:43.498767 1587480 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:59:43.498774 1587480 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1005 21:59:43.498784 1587480 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:59:43.498842 1587480 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:59:43.498924 1587480 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:59:43.499037 1587480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:59:43.508031 1587480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:59:43.531844 1587480 start.go:303] post-start completed in 160.564655ms
	I1005 21:59:43.531966 1587480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:59:43.532017 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:43.550834 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:43.647954 1587480 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:59:43.653719 1587480 fix.go:56] fixHost completed within 25.314671004s
	I1005 21:59:43.653754 1587480 start.go:83] releasing machines lock for "missing-upgrade-141603", held for 25.314739886s
	I1005 21:59:43.653827 1587480 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-141603
	I1005 21:59:43.672468 1587480 ssh_runner.go:195] Run: cat /version.json
	I1005 21:59:43.672484 1587480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:59:43.672525 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:43.672566 1587480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-141603
	I1005 21:59:43.709161 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	I1005 21:59:43.711940 1587480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34276 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/missing-upgrade-141603/id_rsa Username:docker}
	W1005 21:59:43.806779 1587480 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1005 21:59:43.806925 1587480 ssh_runner.go:195] Run: systemctl --version
	I1005 21:59:43.919269 1587480 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:59:44.028490 1587480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:59:44.034427 1587480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:59:44.061105 1587480 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:59:44.061255 1587480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:59:44.095667 1587480 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1005 21:59:44.095739 1587480 start.go:469] detecting cgroup driver to use...
	I1005 21:59:44.095785 1587480 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:59:44.095863 1587480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:59:44.130236 1587480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:59:44.142612 1587480 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:59:44.142713 1587480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:59:44.154537 1587480 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:59:44.166927 1587480 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1005 21:59:44.180254 1587480 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1005 21:59:44.180321 1587480 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:59:44.294113 1587480 docker.go:213] disabling docker service ...
	I1005 21:59:44.294175 1587480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:59:44.307839 1587480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:59:44.320639 1587480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:59:44.425752 1587480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:59:44.535002 1587480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 21:59:44.548110 1587480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:59:44.565410 1587480 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 21:59:44.565477 1587480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:59:44.578596 1587480 out.go:177] 
	W1005 21:59:44.580317 1587480 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1005 21:59:44.580340 1587480 out.go:239] * 
	* 
	W1005 21:59:44.581494 1587480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 21:59:44.584025 1587480 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-141603 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-05 21:59:44.632209158 +0000 UTC m=+2695.888144270
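Note on the fatal step: the start aborts at crio.go:59 because the in-place sed edit targets /etc/crio/crio.conf.d/02-crio.conf, and GNU sed exits with status 2 ("can't read ... No such file or directory") when that file is absent, as it is on the old v1.17.0 kicbase image this upgrade test boots from. A minimal shell sketch of the failure mode and a guarded alternative (paths and the pause image tag are taken from the log above; the guard itself is illustrative, not minikube's actual code):

	# Reproduces the exit status seen above: sed -i cannot open a missing file.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	echo $?  # -> 2 when /etc/crio/crio.conf.d/02-crio.conf does not exist

	# Guarded variant: create the drop-in with the desired pause image if absent.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	fi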
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-141603
helpers_test.go:235: (dbg) docker inspect missing-upgrade-141603:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630",
	        "Created": "2023-10-05T21:59:35.892918839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1588456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:59:36.243038668Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630/hostname",
	        "HostsPath": "/var/lib/docker/containers/90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630/hosts",
	        "LogPath": "/var/lib/docker/containers/90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630/90e3dcd17703dbbe1cbc49f81435af169647bf8871ec4609c329c3eb069a1630-json.log",
	        "Name": "/missing-upgrade-141603",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-141603:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-141603",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6748fe1a5a20e32c463c90ed86df39e0c7f30c4f2944f0975fb518e9ccf086fa-init/diff:/var/lib/docker/overlay2/70a2eef69f5924b2e4e9e9526112a2b952b2b35be17f77adf6aa0b9a3dd7e22a/diff:/var/lib/docker/overlay2/af9d36c6a4bd0a3924783bac935ec107d3095f2b519310846ade5251518c98ef/diff:/var/lib/docker/overlay2/979cdc1cf91bceced0f85e99b898db1dae6a9b6762b30b95b10addaf6423c536/diff:/var/lib/docker/overlay2/d070351e27713aac4fab08c1c44eabca70ff7f505a664a27fcf91d5aea09f7ed/diff:/var/lib/docker/overlay2/5cd4d36b70785544241bb24415e3cbd7dd52f8e7c338c53bd8d635df4e3da9cc/diff:/var/lib/docker/overlay2/dbeedb846e538238987e33e03bdb05a41f3b9a0170c00cb6b6561729a08d135b/diff:/var/lib/docker/overlay2/6fa1369a8cd12d1c050e638c268f39ed0448c71148d3f3b218a3c01102946a56/diff:/var/lib/docker/overlay2/a06551f2c598928cc271781048fca53c19d3c4cb664fd21c45025e2125060d91/diff:/var/lib/docker/overlay2/0280acd03f69641fccd4ea3aed3751d626843af80c1a57fb3494611fa7bf0b1f/diff:/var/lib/docker/overlay2/6c6c95
eaee379ce8a28809a4868a87066777afc43021c3641c81ae5400777214/diff:/var/lib/docker/overlay2/d9dcc7bbe3f607dbb59c83681732ccc22dc0182c42eb1a9608868c49c440e40c/diff:/var/lib/docker/overlay2/743df4f08051865b005f435b7fc4fb8146d74a2c5ea247332bc6652b9520d402/diff:/var/lib/docker/overlay2/e0d772ba6f56854125b0a4210e4218b61d186c6d48a8e487010b6183036ede43/diff:/var/lib/docker/overlay2/76e3339af70d89020f271a12ac8b5e92a633b28d409ee03658d5af7ae7666e3a/diff:/var/lib/docker/overlay2/6727005e6b558b2c3e69204bf5a0afa200365762e6fb8e2d02631331229f4e35/diff:/var/lib/docker/overlay2/e5f7756a5067fb5587a4a754af85a6855e1117348adf0373404b45e1d290797b/diff:/var/lib/docker/overlay2/d4782b310e72c39eb6a286f94e02d78587fe512772539a3834eceb1ac60dcf6e/diff:/var/lib/docker/overlay2/9e3a170376b712ab58ba869eb0dfbd3033c4b605601569ddd004bde130c5f2e7/diff:/var/lib/docker/overlay2/833421aad7959eccdbe7301ab45d4fdea6dd74718a3a353bc462712c98729c1b/diff:/var/lib/docker/overlay2/cc7131cc56da7a1c8b12decd318dc462b887129b9258075bb3166352fb45c7c8/diff:/var/lib/d
ocker/overlay2/267be2bb5a0feae2b3e06782b96d9c1316acbd6688b3d1b70b68ffeb5f6e9491/diff:/var/lib/docker/overlay2/b10c2345cdd924c1ea1e7210a6081cf9b11b74fbc100118c52a2e9c9a297b78f/diff:/var/lib/docker/overlay2/c4fd8708e543148b3b00223dedbe48c51e759d3f18f7782eb728b8e076bc51fb/diff:/var/lib/docker/overlay2/88babb18f95f26f6cb8b5a9865dbe9d27d2e8c122abbba737d8b43f61d9ed598/diff:/var/lib/docker/overlay2/a179201e885c6c10822bb20098f0929895d2ca26d204e868dcddb487958c3a53/diff:/var/lib/docker/overlay2/dd9579dd0776a98893059e040e23b363f47f1d4d0e4af63ad0844d849fc2e252/diff:/var/lib/docker/overlay2/980075aae651fa08cda551cc5f311485ad3f16222ac4a69744f4ffe535599a0d/diff:/var/lib/docker/overlay2/33bb250c1369683bbbc9551ee7de3c21495849548e65ece67451f9387c15cb63/diff:/var/lib/docker/overlay2/63818b72506020223fbb1e0cc69e043fec29fa4c86efc6fd9bec566aec176e69/diff:/var/lib/docker/overlay2/9bc34992e925b1f18a50cb2b274007f1574b0fa9ff70618e1e2a34c18d069a9f/diff:/var/lib/docker/overlay2/3fbb11e873f71a92aeba423f4a77ac9cde9b3c5afbdcc90a161f308f3d6
e4277/diff:/var/lib/docker/overlay2/6a6d5048aedae7ff0f26ac31da1ec54e5be390bc91ac62f4aa6b86d24eae50b9/diff:/var/lib/docker/overlay2/f8881ed1f2fe571d2cab86c32e750d36d8c4040b31be6d25fc86f4adbe003247/diff:/var/lib/docker/overlay2/1333b3a368087330c0ead2fc9ad4431509b0e9566c2f46d92314db4a760541a0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6748fe1a5a20e32c463c90ed86df39e0c7f30c4f2944f0975fb518e9ccf086fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6748fe1a5a20e32c463c90ed86df39e0c7f30c4f2944f0975fb518e9ccf086fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6748fe1a5a20e32c463c90ed86df39e0c7f30c4f2944f0975fb518e9ccf086fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-141603",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-141603/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-141603",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-141603",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-141603",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0904056c8b7e417335e4bdce9e881a0b0bb07afbbef2448a2765dd65d9f95ccd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34272"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34273"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0904056c8b7e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-141603": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "90e3dcd17703",
	                        "missing-upgrade-141603"
	                    ],
	                    "NetworkID": "0fd222e908d837de3c6add4fec2947777e73d44da2b2a0b5b02b3fdd65ddffae",
	                    "EndpointID": "8a7e2fd7029a700f01aae527ea51b22ee99d51b47d2876ccd5fba5a7567bbbef",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
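A note on the port bindings above: the container is created with an empty HostPort for each exposed port (Docker then assigns a free ephemeral port), and minikube resolves the assigned value with a Go template over the inspect output, as the cli_runner lines in the log show. The same lookup can be run by hand (container name from this report):

	# Extract the host port mapped to the container's sshd (22/tcp).
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  missing-upgrade-141603
	# -> 34276, matching the sshutil.go:53 SSH client above.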
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-141603 -n missing-upgrade-141603
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-141603 -n missing-upgrade-141603: exit status 6 (319.548103ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1005 21:59:44.958209 1589655 status.go:415] kubeconfig endpoint: got: 192.168.59.36:8443, want: 192.168.67.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-141603" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
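The status error is consistent with the aborted start: the kubeconfig still points at a stale endpoint (192.168.59.36:8443) while the recreated container sits at 192.168.67.2:8443. Outside of CI cleanup, the warning's own suggestion is the fix; a sketch (profile name from this report; kubectl is assumed to use the same kubeconfig):

	# Show the stale server entry, then let minikube rewrite the context.
	kubectl config view -o jsonpath='{.clusters[?(@.name=="missing-upgrade-141603")].cluster.server}'
	out/minikube-linux-arm64 -p missing-upgrade-141603 update-context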
helpers_test.go:175: Cleaning up "missing-upgrade-141603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-141603
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-141603: (2.380461407s)
--- FAIL: TestMissingContainerUpgrade (130.25s)
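For contrast, the very same pause_image sed runs cleanly later in this report (TestPause, crio.go:59 at 21:52:14), which suggests the 02-crio.conf drop-in does exist on the newer kicbase image. On a healthy node the result can be checked over SSH (profile name from the next section; a quick sanity check, not part of the test):

	out/minikube-linux-arm64 -p pause-235090 ssh -- \
	  grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf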

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (80.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-235090 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-235090 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.228695688s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-235090] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-235090 in cluster pause-235090
	* Pulling base image ...
	* Updating the running docker "pause-235090" container ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-235090" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 21:52:04.728502 1559793 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:52:04.728773 1559793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:52:04.728800 1559793 out.go:309] Setting ErrFile to fd 2...
	I1005 21:52:04.728818 1559793 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:52:04.729095 1559793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:52:04.729510 1559793 out.go:303] Setting JSON to false
	I1005 21:52:04.730626 1559793 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27272,"bootTime":1696515453,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:52:04.730731 1559793 start.go:138] virtualization:  
	I1005 21:52:04.733452 1559793 out.go:177] * [pause-235090] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:52:04.735443 1559793 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:52:04.737203 1559793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:52:04.735595 1559793 notify.go:220] Checking for updates...
	I1005 21:52:04.740863 1559793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:52:04.742698 1559793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:52:04.744479 1559793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:52:04.746087 1559793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:52:04.748464 1559793 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:52:04.749042 1559793 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:52:04.779610 1559793 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:52:04.779715 1559793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:52:04.933128 1559793 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 21:52:04.922508287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:52:04.933233 1559793 docker.go:294] overlay module found
	I1005 21:52:04.936686 1559793 out.go:177] * Using the docker driver based on existing profile
	I1005 21:52:04.938452 1559793 start.go:298] selected driver: docker
	I1005 21:52:04.938474 1559793 start.go:902] validating driver "docker" against &{Name:pause-235090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-235090 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:52:04.938623 1559793 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:52:04.938760 1559793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:52:05.064918 1559793 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 21:52:05.052569352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:52:05.065416 1559793 cni.go:84] Creating CNI manager for ""
	I1005 21:52:05.065435 1559793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:52:05.065456 1559793 start_flags.go:321] config:
	{Name:pause-235090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-235090 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:52:05.068255 1559793 out.go:177] * Starting control plane node pause-235090 in cluster pause-235090
	I1005 21:52:05.070073 1559793 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:52:05.072016 1559793 out.go:177] * Pulling base image ...
	I1005 21:52:05.073786 1559793 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:52:05.073842 1559793 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:52:05.073864 1559793 cache.go:57] Caching tarball of preloaded images
	I1005 21:52:05.073943 1559793 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:52:05.073957 1559793 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:52:05.074094 1559793 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/config.json ...
	I1005 21:52:05.074336 1559793 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:52:05.111998 1559793 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:52:05.112033 1559793 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:52:05.112079 1559793 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:52:05.112157 1559793 start.go:365] acquiring machines lock for pause-235090: {Name:mkd72bf1ee45d2a9ac4502257913404a49fe04e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:52:05.112271 1559793 start.go:369] acquired machines lock for "pause-235090" in 61.817µs
	I1005 21:52:05.112315 1559793 start.go:96] Skipping create...Using existing machine configuration
	I1005 21:52:05.112325 1559793 fix.go:54] fixHost starting: 
	I1005 21:52:05.112714 1559793 cli_runner.go:164] Run: docker container inspect pause-235090 --format={{.State.Status}}
	I1005 21:52:05.144188 1559793 fix.go:102] recreateIfNeeded on pause-235090: state=Running err=<nil>
	W1005 21:52:05.144230 1559793 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 21:52:05.146921 1559793 out.go:177] * Updating the running docker "pause-235090" container ...
	I1005 21:52:05.150125 1559793 machine.go:88] provisioning docker machine ...
	I1005 21:52:05.150180 1559793 ubuntu.go:169] provisioning hostname "pause-235090"
	I1005 21:52:05.150256 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:05.171511 1559793 main.go:141] libmachine: Using SSH client type: native
	I1005 21:52:05.171981 1559793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1005 21:52:05.172002 1559793 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-235090 && echo "pause-235090" | sudo tee /etc/hostname
	I1005 21:52:05.343710 1559793 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-235090
	
	I1005 21:52:05.343792 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:05.365402 1559793 main.go:141] libmachine: Using SSH client type: native
	I1005 21:52:05.365834 1559793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1005 21:52:05.365853 1559793 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-235090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-235090/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-235090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:52:05.504145 1559793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:52:05.504172 1559793 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:52:05.504203 1559793 ubuntu.go:177] setting up certificates
	I1005 21:52:05.504213 1559793 provision.go:83] configureAuth start
	I1005 21:52:05.504276 1559793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-235090
	I1005 21:52:05.531447 1559793 provision.go:138] copyHostCerts
	I1005 21:52:05.531509 1559793 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:52:05.531523 1559793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:52:05.531584 1559793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:52:05.531688 1559793 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:52:05.531700 1559793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:52:05.531728 1559793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:52:05.531788 1559793 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:52:05.531798 1559793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:52:05.531818 1559793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:52:05.531861 1559793 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.pause-235090 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-235090]
	I1005 21:52:07.006200 1559793 provision.go:172] copyRemoteCerts
	I1005 21:52:07.006293 1559793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:52:07.006360 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:07.040414 1559793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/pause-235090/id_rsa Username:docker}
	I1005 21:52:07.167182 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:52:07.206227 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1005 21:52:07.248075 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:52:07.288989 1559793 provision.go:86] duration metric: configureAuth took 1.784760931s
	I1005 21:52:07.289013 1559793 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:52:07.289240 1559793 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:52:07.289388 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:07.319697 1559793 main.go:141] libmachine: Using SSH client type: native
	I1005 21:52:07.320175 1559793 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1005 21:52:07.320199 1559793 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:52:12.786743 1559793 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:52:12.786769 1559793 machine.go:91] provisioned docker machine in 7.636616395s
	I1005 21:52:12.786780 1559793 start.go:300] post-start starting for "pause-235090" (driver="docker")
	I1005 21:52:12.786792 1559793 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:52:12.786860 1559793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:52:12.786904 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:12.808551 1559793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/pause-235090/id_rsa Username:docker}
	I1005 21:52:12.910906 1559793 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:52:12.916167 1559793 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:52:12.916201 1559793 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:52:12.916213 1559793 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:52:12.916223 1559793 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:52:12.916237 1559793 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:52:12.916294 1559793 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:52:12.916384 1559793 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:52:12.916492 1559793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:52:12.928805 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:52:12.964434 1559793 start.go:303] post-start completed in 177.63678ms
	I1005 21:52:12.964508 1559793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:52:12.964547 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:12.991689 1559793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/pause-235090/id_rsa Username:docker}
	I1005 21:52:13.087915 1559793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:52:13.094160 1559793 fix.go:56] fixHost completed within 7.981827675s
	I1005 21:52:13.094182 1559793 start.go:83] releasing machines lock for "pause-235090", held for 7.981883231s
	I1005 21:52:13.094255 1559793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-235090
	I1005 21:52:13.120400 1559793 ssh_runner.go:195] Run: cat /version.json
	I1005 21:52:13.120451 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:13.120668 1559793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:52:13.120702 1559793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-235090
	I1005 21:52:13.159691 1559793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/pause-235090/id_rsa Username:docker}
	I1005 21:52:13.161146 1559793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/pause-235090/id_rsa Username:docker}
	I1005 21:52:13.401394 1559793 ssh_runner.go:195] Run: systemctl --version
	I1005 21:52:13.408063 1559793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 21:52:13.576468 1559793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 21:52:13.583425 1559793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:52:13.595705 1559793 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 21:52:13.595791 1559793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 21:52:13.607234 1559793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
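The two find/mv passes above neutralize any competing CNI configuration without deleting it: matching files in /etc/cni/net.d are renamed with a .mk_disabled suffix so a later start can restore them. A minimal Go sketch of that rename-to-disable pattern (illustrative only, not minikube's cni package; the glob patterns are taken from the commands in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfigs renames matching CNI config files to <name>.mk_disabled,
    // mirroring the `find ... -exec mv {} {}.mk_disabled` commands in the log.
    func disableCNIConfigs(dir string, patterns ...string) error {
    	for _, pat := range patterns {
    		matches, err := filepath.Glob(filepath.Join(dir, pat))
    		if err != nil {
    			return err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	err := disableCNIConfigs("/etc/cni/net.d", "*loopback.conf*", "*bridge*", "*podman*")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }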
	I1005 21:52:13.607261 1559793 start.go:469] detecting cgroup driver to use...
	I1005 21:52:13.607293 1559793 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 21:52:13.607345 1559793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 21:52:13.623860 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 21:52:13.640708 1559793 docker.go:197] disabling cri-docker service (if available) ...
	I1005 21:52:13.640780 1559793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 21:52:13.659536 1559793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 21:52:13.675244 1559793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1005 21:52:13.851786 1559793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 21:52:14.018189 1559793 docker.go:213] disabling docker service ...
	I1005 21:52:14.018272 1559793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 21:52:14.034304 1559793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 21:52:14.050441 1559793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 21:52:14.224843 1559793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 21:52:14.383867 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
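The block above is the standard unit-neutering sequence for keeping dockerd off the runtime socket while CRI-O runs: stop the socket and the service, disable the socket so socket activation cannot revive it, mask the service so nothing else can start it, then verify with is-active. A short Go sketch of driving that sequence (a hypothetical helper, not minikube's docker.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirror the systemctl sequence from the log: stop both units,
    	// disable the socket, then mask the service.
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "%v failed: %v\n%s", s, err, out)
    		}
    	}
    }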
	I1005 21:52:14.399062 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 21:52:14.426926 1559793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1005 21:52:14.427022 1559793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:52:14.441914 1559793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1005 21:52:14.442004 1559793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:52:14.456081 1559793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 21:52:14.471496 1559793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
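Taken together, the three sed edits leave the drop-in with a pinned pause image, the cgroupfs manager detected on the host, and conmon placed in the pod cgroup. The resulting lines of /etc/crio/crio.conf.d/02-crio.conf would look like the excerpt below (reconstructed from the replacement strings above; the TOML section headers follow CRI-O's stock layout and are an assumption here):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"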
	I1005 21:52:14.501948 1559793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1005 21:52:14.526043 1559793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1005 21:52:14.556515 1559793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1005 21:52:14.612857 1559793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1005 21:52:15.108044 1559793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1005 21:52:15.600240 1559793 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1005 21:52:15.600307 1559793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1005 21:52:15.628817 1559793 start.go:537] Will wait 60s for crictl version
	I1005 21:52:15.628879 1559793 ssh_runner.go:195] Run: which crictl
	I1005 21:52:15.636127 1559793 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1005 21:52:15.767645 1559793 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1005 21:52:15.767736 1559793 ssh_runner.go:195] Run: crio --version
	I1005 21:52:15.832892 1559793 ssh_runner.go:195] Run: crio --version
	I1005 21:52:15.895455 1559793 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1005 21:52:15.897433 1559793 cli_runner.go:164] Run: docker network inspect pause-235090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:52:15.916383 1559793 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1005 21:52:15.921525 1559793 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:52:15.921588 1559793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:52:15.978039 1559793 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:52:15.978078 1559793 crio.go:415] Images already preloaded, skipping extraction
	I1005 21:52:15.978136 1559793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1005 21:52:16.029745 1559793 crio.go:496] all images are preloaded for cri-o runtime.
	I1005 21:52:16.029772 1559793 cache_images.go:84] Images are preloaded, skipping loading
	I1005 21:52:16.029867 1559793 ssh_runner.go:195] Run: crio config
	I1005 21:52:16.115517 1559793 cni.go:84] Creating CNI manager for ""
	I1005 21:52:16.115538 1559793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:52:16.115559 1559793 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1005 21:52:16.115578 1559793 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-235090 NodeName:pause-235090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1005 21:52:16.115716 1559793 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-235090"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1005 21:52:16.115782 1559793 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-235090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-235090 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1005 21:52:16.115844 1559793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1005 21:52:16.128673 1559793 binaries.go:44] Found k8s binaries, skipping transfer
	I1005 21:52:16.128783 1559793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1005 21:52:16.140944 1559793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1005 21:52:16.173835 1559793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1005 21:52:16.204072 1559793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1005 21:52:16.227102 1559793 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1005 21:52:16.232250 1559793 certs.go:56] Setting up /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090 for IP: 192.168.67.2
	I1005 21:52:16.232325 1559793 certs.go:190] acquiring lock for shared ca certs: {Name:mkfac5d4c0ae883432caac512ac8160283213d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:52:16.232508 1559793 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key
	I1005 21:52:16.232574 1559793 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key
	I1005 21:52:16.232677 1559793 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key
	I1005 21:52:16.232783 1559793 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/apiserver.key.c7fa3a9e
	I1005 21:52:16.232853 1559793 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/proxy-client.key
	I1005 21:52:16.232995 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem (1338 bytes)
	W1005 21:52:16.233048 1559793 certs.go:433] ignoring /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786_empty.pem, impossibly tiny 0 bytes
	I1005 21:52:16.233074 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem (1679 bytes)
	I1005 21:52:16.233130 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem (1082 bytes)
	I1005 21:52:16.233180 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem (1123 bytes)
	I1005 21:52:16.233241 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem (1675 bytes)
	I1005 21:52:16.233314 1559793 certs.go:437] found cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:52:16.233986 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1005 21:52:16.265368 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1005 21:52:16.296784 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1005 21:52:16.332598 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1005 21:52:16.362000 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1005 21:52:16.394593 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1005 21:52:16.431968 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1005 21:52:16.466190 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1005 21:52:16.496652 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/1453786.pem --> /usr/share/ca-certificates/1453786.pem (1338 bytes)
	I1005 21:52:16.527212 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /usr/share/ca-certificates/14537862.pem (1708 bytes)
	I1005 21:52:16.557779 1559793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1005 21:52:16.588836 1559793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1005 21:52:16.612375 1559793 ssh_runner.go:195] Run: openssl version
	I1005 21:52:16.620795 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1453786.pem && ln -fs /usr/share/ca-certificates/1453786.pem /etc/ssl/certs/1453786.pem"
	I1005 21:52:16.633471 1559793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1453786.pem
	I1005 21:52:16.639212 1559793 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  5 21:22 /usr/share/ca-certificates/1453786.pem
	I1005 21:52:16.639320 1559793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1453786.pem
	I1005 21:52:16.649426 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1453786.pem /etc/ssl/certs/51391683.0"
	I1005 21:52:16.662209 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14537862.pem && ln -fs /usr/share/ca-certificates/14537862.pem /etc/ssl/certs/14537862.pem"
	I1005 21:52:16.702734 1559793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14537862.pem
	I1005 21:52:16.708708 1559793 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  5 21:22 /usr/share/ca-certificates/14537862.pem
	I1005 21:52:16.708776 1559793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14537862.pem
	I1005 21:52:16.717953 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14537862.pem /etc/ssl/certs/3ec20f2e.0"
	I1005 21:52:16.730105 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1005 21:52:16.743125 1559793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:52:16.748400 1559793 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  5 21:15 /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:52:16.748462 1559793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1005 21:52:16.757877 1559793 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
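The ls/openssl/ln sequence above builds the standard OpenSSL CA directory layout: each certificate under /usr/share/ca-certificates gets a symlink /etc/ssl/certs/<subject-hash>.0 (b5213941.0 for minikubeCA.pem above) so TLS clients can locate it by hash at verification time. A minimal Go sketch of the same idea (illustrative; it shells out to openssl just as the log does):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash creates /etc/ssl/certs/<subject-hash>.0 -> cert,
    // mirroring the `openssl x509 -hash` and `ln -fs` steps in the log.
    func linkCertByHash(cert string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", cert, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(cert, link)
    }

    func main() {
    	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }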
	I1005 21:52:16.770053 1559793 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1005 21:52:16.776517 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1005 21:52:16.785796 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1005 21:52:16.798264 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1005 21:52:16.811233 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1005 21:52:16.820053 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1005 21:52:16.829159 1559793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
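Each `openssl x509 -checkend 86400` above exits 0 only if the certificate will still be valid 24 hours from now; a non-zero exit is the cue to regenerate that cert before restarting the control plane. A small Go sketch of the same check (certValid24h is a hypothetical helper name):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // certValid24h reports whether the certificate survives the next 86400
    // seconds; `openssl x509 -checkend` exits non-zero when it would expire.
    func certValid24h(cert string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run() == nil
    }

    func main() {
    	fmt.Println(certValid24h("/var/lib/minikube/certs/etcd/server.crt"))
    }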
	I1005 21:52:16.842178 1559793 kubeadm.go:404] StartCluster: {Name:pause-235090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-235090 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:52:16.842293 1559793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1005 21:52:16.842356 1559793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:52:16.912321 1559793 cri.go:89] found id: "ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e"
	I1005 21:52:16.912342 1559793 cri.go:89] found id: "2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5"
	I1005 21:52:16.912348 1559793 cri.go:89] found id: "ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df"
	I1005 21:52:16.912353 1559793 cri.go:89] found id: "a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8"
	I1005 21:52:16.912357 1559793 cri.go:89] found id: "f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a"
	I1005 21:52:16.912361 1559793 cri.go:89] found id: "760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806"
	I1005 21:52:16.912365 1559793 cri.go:89] found id: "4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6"
	I1005 21:52:16.912369 1559793 cri.go:89] found id: ""
	I1005 21:52:16.912418 1559793 ssh_runner.go:195] Run: sudo runc list -f json
	I1005 21:52:16.943851 1559793 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5/userdata","rootfs":"/var/lib/containers/storage/overlay/33088530d57447b848d0a7b86622d3de474e2d3510de2e41cd0b67c70350fbae/merged","created":"2023-10-05T21:52:14.859619233Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b739d865","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b739d865\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:52:14.546123416Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-ntfxs\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d6f70b29-95e2-4894-95d2-97463d8af989\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-ntfxs_d6f70b29-95e2-4894-95d2-97463d8af989/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-
o.MountPoint":"/var/lib/containers/storage/overlay/33088530d57447b848d0a7b86622d3de474e2d3510de2e41cd0b67c70350fbae/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-ntfxs_kube-system_d6f70b29-95e2-4894-95d2-97463d8af989_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/600beb5cc0b7b20340c07b218da8184f72bffa203f2a845790c78c374be4adab/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"600beb5cc0b7b20340c07b218da8184f72bffa203f2a845790c78c374be4adab","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-ntfxs_kube-system_d6f70b29-95e2-4894-95d2-97463d8af989_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propag
ation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d6f70b29-95e2-4894-95d2-97463d8af989/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d6f70b29-95e2-4894-95d2-97463d8af989/containers/kindnet-cni/640b3aa8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d6f70b29-95e2-4894-95d2-97463d8af989/volumes/kubernetes.io~projected/kube-api-access-kg2jl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-ntfxs","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d6f70b29-95e2-4894-95d2
-97463d8af989","kubernetes.io/config.seen":"2023-10-05T21:51:59.548551553Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6/userdata","rootfs":"/var/lib/containers/storage/overlay/966d50340fbde0b108fd9b2d370cdefc39f337fa07783550afddf209a46859df/merged","created":"2023-10-05T21:51:34.398961051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"66541c94","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"66541c94\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessa
gePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:51:34.224288708Z","io.kubernetes.cri-o.Image":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.2","io.kubernetes.cri-o.ImageRef":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-235090\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8910d5deb2c1ed16d8b1c04887a5e3e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-235090_8910d5deb2c1ed16d8b1c04887a5e3e2/kube-scheduler/0.log","
io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/966d50340fbde0b108fd9b2d370cdefc39f337fa07783550afddf209a46859df/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-235090_kube-system_8910d5deb2c1ed16d8b1c04887a5e3e2_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c820472ec9b8a724c1ae0fd1f12cdc4279b5853d3cde7336552a48c8bd57900d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c820472ec9b8a724c1ae0fd1f12cdc4279b5853d3cde7336552a48c8bd57900d","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-235090_kube-system_8910d5deb2c1ed16d8b1c04887a5e3e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8910d5deb2c1ed16d8b1c04887a5e3e2/etc-hosts\",\"readonly\":f
alse,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8910d5deb2c1ed16d8b1c04887a5e3e2/containers/kube-scheduler/668872f9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-235090","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8910d5deb2c1ed16d8b1c04887a5e3e2","kubernetes.io/config.hash":"8910d5deb2c1ed16d8b1c04887a5e3e2","kubernetes.io/config.seen":"2023-10-05T21:51:33.640230605Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/760e356ac45fc228eebc7bbf41
36c576558594ad8651f820ba16e5c64d85f806/userdata","rootfs":"/var/lib/containers/storage/overlay/2f9ee1485a16ca7b8cca2ac14e99283984829da3505ebe3de65237fab329b489/merged","created":"2023-10-05T21:51:34.466766807Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3f60172","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3f60172\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-
10-05T21:51:34.25700979Z","io.kubernetes.cri-o.Image":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.2","io.kubernetes.cri-o.ImageRef":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-235090\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"fa7707dfff03155a822560d73e7c9ce2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-235090_fa7707dfff03155a822560d73e7c9ce2/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2f9ee1485a16ca7b8cca2ac14e99283984829da3505ebe3de65237fab329b489/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-235090_kube-system_fa7707dfff03155a822560d73e7c9ce2_0","io.kube
rnetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/51f7a2b5b4427c4f52d03dbb12e0ce68fcfdbb0027c4ef8f7f2d8a37bf54e224/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"51f7a2b5b4427c4f52d03dbb12e0ce68fcfdbb0027c4ef8f7f2d8a37bf54e224","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-235090_kube-system_fa7707dfff03155a822560d73e7c9ce2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/fa7707dfff03155a822560d73e7c9ce2/containers/kube-apiserver/9069104a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/fa7707dfff03155a822560d73e7c9ce
2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-235090","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"fa7707dfff03155a822560d73e7c9ce2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"fa770
7dfff03155a822560d73e7c9ce2","kubernetes.io/config.seen":"2023-10-05T21:51:33.640222613Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8/userdata","rootfs":"/var/lib/containers/storage/overlay/a9354738302efc736389a89a4f463716bd9773cf0bf654e12d39ddaddd59bd5e/merged","created":"2023-10-05T21:52:01.817510149Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"64b2ee69","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/terminati
on-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"64b2ee69\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:52:01.780561298Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb10
8","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-84s28\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f9362fc7-f2d0-411f-a717-fa70ffafabcb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-84s28_f9362fc7-f2d0-411f-a717-fa70ffafabcb/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a9354738302efc736389a89a4f463716bd9773cf0bf654e12d39ddaddd59bd5e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-84s28_kube-system_f9362fc7-f2d0-411f-a717-fa70ffafabcb_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/932929e3687b0a33fadf97d4460547523107972997c6f040e3e0cac5fef44c70/userdata
/resolv.conf","io.kubernetes.cri-o.SandboxID":"932929e3687b0a33fadf97d4460547523107972997c6f040e3e0cac5fef44c70","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-84s28_kube-system_f9362fc7-f2d0-411f-a717-fa70ffafabcb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/f9362fc7-f2d0-411f-a717-fa70ffafabcb/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f9362fc7-f2d0-411f-a717-fa70ffafabcb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f9362fc7-f2d0-411f-a717-fa70ffafabcb/containers/coredns/acaebe92\",\"readonly\":false,\"propagation\":0,\"selinux_rel
abel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f9362fc7-f2d0-411f-a717-fa70ffafabcb/volumes/kubernetes.io~projected/kube-api-access-96999\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-84s28","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f9362fc7-f2d0-411f-a717-fa70ffafabcb","kubernetes.io/config.seen":"2023-10-05T21:52:01.395433505Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e/userdata","rootfs":"/var/lib/containers/storage/overlay/d634ce368fded0c9a77c23dc764c4df93b1754e3251c9ab69b6047d02d32c900/merged","created":"2023-10-05T21:52:15.139147603Z","annotat
ions":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1390b31d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1390b31d\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:52:14.602993577Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f4
8c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-235090\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d62e2276a9bf4171098cffa39a255eb9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-235090_d62e2276a9bf4171098cffa39a255eb9/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d634ce368fded0c9a77c23dc764c4df93b1754e3251c9ab69b6047d02d32c900/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-235090_kube-system_d62e2276a9bf4171098cffa39a255eb9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6da149e9221cd6eae24778047ad9f6fb68017dea733fe8d052612ed71a1d091c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6da149e9221cd6eae24778047ad9f6fb68017dea733fe8d052612ed71a1d091c","io.kubernetes.cri-o.SandboxName":"k8
s_etcd-pause-235090_kube-system_d62e2276a9bf4171098cffa39a255eb9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d62e2276a9bf4171098cffa39a255eb9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d62e2276a9bf4171098cffa39a255eb9/containers/etcd/fcada546\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-235090","io.kubernetes.pod
.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d62e2276a9bf4171098cffa39a255eb9","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"d62e2276a9bf4171098cffa39a255eb9","kubernetes.io/config.seen":"2023-10-05T21:51:33.640215565Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df/userdata","rootfs":"/var/lib/containers/storage/overlay/4e64e147be638775fda27ea2e3663dcd620395bc651ec8aa79c0dec20993f020/merged","created":"2023-10-05T21:52:14.658458095Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8191d4c4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessageP
ath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8191d4c4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:52:14.53675999Z","io.kubernetes.cri-o.Image":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.2","io.kubernetes.cri-o.ImageRef":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-q7sdt\",\"io.kubernetes.p
od.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f45facf4-987f-4d09-bc27-1f5cd7879216\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-q7sdt_f45facf4-987f-4d09-bc27-1f5cd7879216/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4e64e147be638775fda27ea2e3663dcd620395bc651ec8aa79c0dec20993f020/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-q7sdt_kube-system_f45facf4-987f-4d09-bc27-1f5cd7879216_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5d1b1958be9ccd2cb88fb3e1429ccbc440797405b4a96bc7c168ef3eae9e238d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d1b1958be9ccd2cb88fb3e1429ccbc440797405b4a96bc7c168ef3eae9e238d","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-q7sdt_kube-system_f45facf4-987f-4d09-bc27-1f5cd7879216_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.c
ri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f45facf4-987f-4d09-bc27-1f5cd7879216/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f45facf4-987f-4d09-bc27-1f5cd7879216/containers/kube-proxy/4a9cf9fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f45facf4-987f-4d09-bc27-1f5cd7879216/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/
secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f45facf4-987f-4d09-bc27-1f5cd7879216/volumes/kubernetes.io~projected/kube-api-access-zhvcp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-q7sdt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f45facf4-987f-4d09-bc27-1f5cd7879216","kubernetes.io/config.seen":"2023-10-05T21:51:59.474358829Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a/userdata","rootfs":"/var/lib/containers/storage/overlay/7213a3fcd4df5e3bf042ad218f213d2a6f17c3d8c829e5f1dfe54ded022c2c8e/merged","created":"2023-10-05T21:51:34.497403584Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.
container.hash":"1dae5448","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1dae5448\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-05T21:51:34.298976457Z","io.kubernetes.cri-o.Image":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.2","io.kubernetes.cri-o.ImageRef":"89d57b83c17862d0ca2dd214e
9e5ad425f8d67ecba32d10b846f8d22d3b5597c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-235090\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"41c0005ed7cef3f01ccad667fb5d4c47\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-235090_41c0005ed7cef3f01ccad667fb5d4c47/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7213a3fcd4df5e3bf042ad218f213d2a6f17c3d8c829e5f1dfe54ded022c2c8e/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-235090_kube-system_41c0005ed7cef3f01ccad667fb5d4c47_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/83bc0a0b9c96888a9667713ece5865fa7ace6ce65998b34fa4b5de6dbff987fa/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":
"83bc0a0b9c96888a9667713ece5865fa7ace6ce65998b34fa4b5de6dbff987fa","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-235090_kube-system_41c0005ed7cef3f01ccad667fb5d4c47_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/41c0005ed7cef3f01ccad667fb5d4c47/containers/kube-controller-manager/1fab1458\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/41c0005ed7cef3f01ccad667fb5d4c47/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"
propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-235090","io.kubernetes.pod.namespace":"kube-s
ystem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"41c0005ed7cef3f01ccad667fb5d4c47","kubernetes.io/config.hash":"41c0005ed7cef3f01ccad667fb5d4c47","kubernetes.io/config.seen":"2023-10-05T21:51:33.640228873Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1005 21:52:16.944409 1559793 cri.go:126] list returned 7 containers
	I1005 21:52:16.944419 1559793 cri.go:129] container: {ID:2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 Status:stopped}
	I1005 21:52:16.944436 1559793 cri.go:135] skipping {2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944447 1559793 cri.go:129] container: {ID:4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6 Status:stopped}
	I1005 21:52:16.944454 1559793 cri.go:135] skipping {4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6 stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944460 1559793 cri.go:129] container: {ID:760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 Status:stopped}
	I1005 21:52:16.944467 1559793 cri.go:135] skipping {760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944473 1559793 cri.go:129] container: {ID:a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 Status:stopped}
	I1005 21:52:16.944484 1559793 cri.go:135] skipping {a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944491 1559793 cri.go:129] container: {ID:ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e Status:stopped}
	I1005 21:52:16.944497 1559793 cri.go:135] skipping {ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944503 1559793 cri.go:129] container: {ID:ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df Status:stopped}
	I1005 21:52:16.944509 1559793 cri.go:135] skipping {ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df stopped}: state = "stopped", want "paused"
	I1005 21:52:16.944515 1559793 cri.go:129] container: {ID:f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a Status:stopped}
	I1005 21:52:16.944522 1559793 cri.go:135] skipping {f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a stopped}: state = "stopped", want "paused"
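The cri.go lines above show the filter at work: `runc list -f json` returns every container with its state, and because this pass wants paused containers, all seven stopped ones are skipped and the result set is empty. A minimal Go sketch of that decode-and-filter step (the struct is pared down to the two fields the log actually consults):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // container holds the two fields of `runc list -f json` output used here.
    type container struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    // filterByState keeps containers whose status matches want and logs a
    // skip for the rest, as cri.go does in the lines above.
    func filterByState(listJSON []byte, want string) ([]container, error) {
    	var all []container
    	if err := json.Unmarshal(listJSON, &all); err != nil {
    		return nil, err
    	}
    	var kept []container
    	for _, c := range all {
    		if c.Status != want {
    			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
    			continue
    		}
    		kept = append(kept, c)
    	}
    	return kept, nil
    }

    func main() {
    	sample := []byte(`[{"id":"abc","status":"stopped"},{"id":"def","status":"paused"}]`)
    	kept, _ := filterByState(sample, "paused")
    	fmt.Println(len(kept), "matching container(s)")
    }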
	I1005 21:52:16.944579 1559793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1005 21:52:16.964793 1559793 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1005 21:52:16.964815 1559793 kubeadm.go:636] restartCluster start
	I1005 21:52:16.964872 1559793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1005 21:52:16.977760 1559793 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:16.978396 1559793 kubeconfig.go:92] found "pause-235090" server: "https://192.168.67.2:8443"
	I1005 21:52:16.979198 1559793 kapi.go:59] client config for pause-235090: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
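The &rest.Config dump above is the client-go configuration minikube builds for this profile: the apiserver host plus the profile's client certificate, key, and CA. A hedged sketch of assembling the same config by hand (paths copied from the log; this is not minikube's kapi helper itself):

// client_config.go - sketch of the rest.Config seen in kapi.go:59 above.
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.67.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key",
			CAFile:   "/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt",
		},
	}
	fmt.Printf("host=%s ca=%s\n", cfg.Host, cfg.TLSClientConfig.CAFile)
}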
	I1005 21:52:16.980035 1559793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1005 21:52:16.993107 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:16.993172 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 21:52:17.007441 1559793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:17.007503 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:17.007600 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 21:52:17.020662 1559793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:17.521390 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:17.521473 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 21:52:17.534091 1559793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:18.021889 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:18.022044 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 21:52:18.043271 1559793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:18.521446 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:18.521541 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1005 21:52:18.542713 1559793 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:19.021848 1559793 api_server.go:166] Checking apiserver status ...
	I1005 21:52:19.021930 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:52:19.051299 1559793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2779/cgroup
	I1005 21:52:19.083461 1559793 api_server.go:182] apiserver freezer: "6:freezer:/docker/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/crio/crio-bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5"
	I1005 21:52:19.083535 1559793 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/crio/crio-bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5/freezer.state
	I1005 21:52:19.098452 1559793 api_server.go:204] freezer state: "THAWED"
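api_server.go:182/204 above locate the apiserver's cgroup path from /proc/2779/cgroup and then read its cgroup v1 freezer.state; "THAWED" means the container is running rather than paused. A small sketch of that read, with an illustrative (shortened) path:

// freezer_state.go - sketch of reading a cgroup v1 freezer state.
package main

import (
	"fmt"
	"os"
	"strings"
)

func freezerState(dir string) (string, error) {
	b, err := os.ReadFile(dir + "/freezer.state")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	st, err := freezerState("/sys/fs/cgroup/freezer/docker/fe18557f83a9/crio/crio-bec04f1405f8")
	if err != nil {
		fmt.Println("not available on this host:", err) // expected off the test node
		return
	}
	fmt.Println("freezer state:", st) // the log saw "THAWED"
}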
	I1005 21:52:19.098481 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:24.098847 1559793 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1005 21:52:24.098892 1559793 retry.go:31] will retry after 216.749144ms: state is "Stopped"
	I1005 21:52:24.315757 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:29.317439 1559793 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1005 21:52:29.317483 1559793 retry.go:31] will retry after 251.39575ms: state is "Stopped"
	I1005 21:52:29.569984 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:32.147874 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 21:52:32.147911 1559793 retry.go:31] will retry after 366.822237ms: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 21:52:32.516125 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:32.568099 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:52:32.568133 1559793 retry.go:31] will retry after 382.462342ms: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:52:32.951398 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:32.975956 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:52:32.976030 1559793 retry.go:31] will retry after 698.21202ms: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
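Between failed probes, retry.go:31 sleeps for a randomized, growing delay (216ms, 251ms, 366ms, 382ms, 698ms above). A hedged sketch of that pattern, not the actual retry.go implementation:

// retry_sketch.go - probe with jittered, growing delays until success or deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, probe func() error) error {
	start := time.Now()
	base := 200 * time.Millisecond
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base = base * 3 / 2 // grow roughly the way the logged delays do
	}
}

func main() {
	n := 0
	_ = retryUntil(10*time.Second, func() error {
		n++
		if n < 3 {
			return errors.New("healthz check failed")
		}
		return nil
	})
}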
	I1005 21:52:33.674949 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:33.698395 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:52:33.698437 1559793 kubeadm.go:611] needs reconfigure: apiserver error: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:52:33.698446 1559793 kubeadm.go:1128] stopping kube-system containers ...
	I1005 21:52:33.698457 1559793 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1005 21:52:33.698519 1559793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1005 21:52:33.842420 1559793 cri.go:89] found id: "614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4"
	I1005 21:52:33.842442 1559793 cri.go:89] found id: "dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796"
	I1005 21:52:33.842448 1559793 cri.go:89] found id: "6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d"
	I1005 21:52:33.842453 1559793 cri.go:89] found id: "6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44"
	I1005 21:52:33.842457 1559793 cri.go:89] found id: "bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0"
	I1005 21:52:33.842462 1559793 cri.go:89] found id: "b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89"
	I1005 21:52:33.842466 1559793 cri.go:89] found id: "bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5"
	I1005 21:52:33.842471 1559793 cri.go:89] found id: "ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e"
	I1005 21:52:33.842475 1559793 cri.go:89] found id: "2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5"
	I1005 21:52:33.842482 1559793 cri.go:89] found id: "ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df"
	I1005 21:52:33.842487 1559793 cri.go:89] found id: "a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8"
	I1005 21:52:33.842491 1559793 cri.go:89] found id: "f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a"
	I1005 21:52:33.842496 1559793 cri.go:89] found id: "760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806"
	I1005 21:52:33.842503 1559793 cri.go:89] found id: "4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6"
	I1005 21:52:33.842508 1559793 cri.go:89] found id: ""
	I1005 21:52:33.842513 1559793 cri.go:234] Stopping containers: [614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4 dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796 6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d 6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44 bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0 b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89 bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5 ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e 2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a 760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6]
	I1005 21:52:33.842568 1559793 ssh_runner.go:195] Run: which crictl
	I1005 21:52:33.861809 1559793 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4 dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796 6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d 6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44 bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0 b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89 bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5 ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e 2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a 760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6
	I1005 21:52:50.156611 1559793 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4 dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796 6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d 6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44 bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0 b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89 bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5 ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e 2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a 760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6: (16.294756395s)
	W1005 21:52:50.156676 1559793 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4 dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796 6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d 6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44 bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0 b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89 bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5 ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e 2f34d16946754a5a88953323742a3f599420f5d2a87bd2553436d5d36ae8cdb5 ef04be5e80e82727e935447961b73ecaf97ae58a5889e53f55799d26a28979df a957a535cff0a89f698ceafcf780c7e7aa23edbcdd8254e4a7dc0e06fc09d3a8 f59b150461129b9a81ab7b49490157fc15314f7b93fc1365afb5b325666cae7a 760e356ac45fc228eebc7bbf4136c576558594ad8651f820ba16e5c64d85f806 4838809495a59414492037aded28f1d21b84445704e592b22b83988d4d2ebbd6: Process exited with status 1
	stdout:
	614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4
	dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796
	6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d
	6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44
	bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0
	b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89
	bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5
	
	stderr:
	E1005 21:52:50.153034    3203 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e\": container with ID starting with ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e not found: ID does not exist" containerID="ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e"
	time="2023-10-05T21:52:50Z" level=fatal msg="stopping the container \"ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e\": rpc error: code = NotFound desc = could not find container \"ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e\": container with ID starting with ee5aa53115fd19e494d86164019c8dc385b6713243ad971f56263e8ddc566e6e not found: ID does not exist"
	I1005 21:52:50.156736 1559793 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1005 21:52:50.261133 1559793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1005 21:52:50.273180 1559793 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct  5 21:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct  5 21:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct  5 21:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct  5 21:51 /etc/kubernetes/scheduler.conf
	
	I1005 21:52:50.273245 1559793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1005 21:52:50.284904 1559793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1005 21:52:50.296097 1559793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1005 21:52:50.307251 1559793 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:50.307312 1559793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1005 21:52:50.318670 1559793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1005 21:52:50.330815 1559793 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1005 21:52:50.330887 1559793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
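kubeadm.go:155/166 above list the existing config files, grep each kubeconfig for the expected control-plane endpoint, and delete any file that does not mention it so kubeadm can regenerate it. A sketch of the same pruning logic (paths and endpoint copied from the log; the real check runs as root over SSH):

// stale_kubeconfig.go - remove kubeconfigs missing the control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pruneStale(files []string, endpoint string) {
	for _, f := range files {
		b, err := os.ReadFile(f)
		if err != nil {
			continue // unreadable locally; harmless for the sketch
		}
		if !strings.Contains(string(b), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}

func main() {
	pruneStale(
		[]string{"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"},
		"https://control-plane.minikube.internal:8443",
	)
}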
	I1005 21:52:50.341917 1559793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1005 21:52:50.353453 1559793 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1005 21:52:50.353477 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 21:52:50.429887 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 21:52:52.699556 1559793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.269633026s)
	I1005 21:52:52.699589 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1005 21:52:52.970498 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 21:52:53.150180 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
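Rather than a full kubeadm init, the restart above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing kubeadm.yaml. A hedged sketch of driving those phases in order (binary and config paths taken from the log):

// reinit_phases.go - run kubeadm init phases against an existing config.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("/var/lib/minikube/binaries/v1.28.2/kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}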
	I1005 21:52:53.427021 1559793 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:52:53.427090 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:52:53.461788 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:52:54.017497 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:52:54.516927 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:52:54.560122 1559793 api_server.go:72] duration metric: took 1.13310025s to wait for apiserver process to appear ...
	I1005 21:52:54.560144 1559793 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:52:54.560164 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:54.560440 1559793 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 21:52:54.560460 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:52:54.560603 1559793 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1005 21:52:55.061616 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:00.061986 1559793 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1005 21:53:00.062026 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:01.758844 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1005 21:53:01.758873 1559793 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 21:53:01.758885 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:01.866884 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1005 21:53:01.866917 1559793 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1005 21:53:02.061291 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:02.081985 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 21:53:02.082081 1559793 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:53:02.560718 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:02.577513 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 21:53:02.577539 1559793 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:53:03.060721 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:03.076629 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1005 21:53:03.076718 1559793 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1005 21:53:03.560742 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:03.571103 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 21:53:03.588231 1559793 api_server.go:141] control plane version: v1.28.2
	I1005 21:53:03.588260 1559793 api_server.go:131] duration metric: took 9.028109462s to wait for apiserver health ...
	I1005 21:53:03.588270 1559793 cni.go:84] Creating CNI manager for ""
	I1005 21:53:03.588277 1559793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:53:03.590999 1559793 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1005 21:53:03.593417 1559793 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1005 21:53:03.599758 1559793 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1005 21:53:03.599803 1559793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1005 21:53:03.637377 1559793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1005 21:53:04.734446 1559793 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.097033827s)
	I1005 21:53:04.734479 1559793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:53:04.743028 1559793 system_pods.go:59] 7 kube-system pods found
	I1005 21:53:04.743070 1559793 system_pods.go:61] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1005 21:53:04.743084 1559793 system_pods.go:61] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1005 21:53:04.743090 1559793 system_pods.go:61] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:04.743097 1559793 system_pods.go:61] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1005 21:53:04.743110 1559793 system_pods.go:61] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1005 21:53:04.743118 1559793 system_pods.go:61] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:04.743126 1559793 system_pods.go:61] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1005 21:53:04.743138 1559793 system_pods.go:74] duration metric: took 8.651492ms to wait for pod list to return data ...
	I1005 21:53:04.743146 1559793 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:53:04.746579 1559793 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:53:04.746612 1559793 node_conditions.go:123] node cpu capacity is 2
	I1005 21:53:04.746635 1559793 node_conditions.go:105] duration metric: took 3.480217ms to run NodePressure ...
	I1005 21:53:04.746654 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1005 21:53:05.033243 1559793 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1005 21:53:05.038791 1559793 kubeadm.go:787] kubelet initialised
	I1005 21:53:05.038817 1559793 kubeadm.go:788] duration metric: took 5.549498ms waiting for restarted kubelet to initialise ...
	I1005 21:53:05.038827 1559793 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:05.045777 1559793 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:07.068864 1559793 pod_ready.go:102] pod "coredns-5dd5756b68-84s28" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:08.569574 1559793 pod_ready.go:92] pod "coredns-5dd5756b68-84s28" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:08.569610 1559793 pod_ready.go:81] duration metric: took 3.523800725s waiting for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:08.569622 1559793 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:10.591294 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:12.591330 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:14.108080 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.108149 1559793 pod_ready.go:81] duration metric: took 5.538517474s waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.108176 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120480 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.120540 1559793 pod_ready.go:81] duration metric: took 12.214096ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120575 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133118 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.133187 1559793 pod_ready.go:81] duration metric: took 12.590317ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133214 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144134 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.144205 1559793 pod_ready.go:81] duration metric: took 10.969224ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144231 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153585 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.153605 1559793 pod_ready.go:81] duration metric: took 9.354178ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153615 1559793 pod_ready.go:38] duration metric: took 9.114777614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
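The pod_ready.go waits above poll each system-critical pod until its Ready condition is True, up to a 4m deadline. A hedged client-go sketch of one such wait (the kubeconfig path is an assumption; the pod name is copied from the log):

// pod_ready_wait.go - poll a pod until Ready, client-go style.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-84s28", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		return podReady(p), nil
	})
	fmt.Println("ready:", err == nil)
}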
	I1005 21:53:14.153632 1559793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:53:14.170452 1559793 ops.go:34] apiserver oom_adj: -16
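ops.go:34 above reads the apiserver's OOM score adjustment from /proc/<pid>/oom_adj and finds -16, i.e. the process is strongly deprioritized for OOM kills. A tiny sketch of the same read (using our own PID so it runs anywhere):

// oom_adj_check.go - read a process's oom_adj from /proc.
package main

import (
	"fmt"
	"os"
	"strings"
)

func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := oomAdj(os.Getpid())
	fmt.Println("oom_adj:", v, err) // the apiserver showed -16 in the log
}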
	I1005 21:53:14.170472 1559793 kubeadm.go:640] restartCluster took 57.20564951s
	I1005 21:53:14.170482 1559793 kubeadm.go:406] StartCluster complete in 57.328321693s
	I1005 21:53:14.170498 1559793 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.170563 1559793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:53:14.171226 1559793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.171969 1559793 kapi.go:59] client config for pause-235090: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:53:14.172413 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:53:14.172747 1559793 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:14.172784 1559793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:53:14.176295 1559793 out.go:177] * Enabled addons: 
	I1005 21:53:14.178016 1559793 addons.go:502] enable addons completed in 5.226528ms: enabled=[]
	I1005 21:53:14.184006 1559793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-235090" context rescaled to 1 replicas
	I1005 21:53:14.184044 1559793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:53:14.185870 1559793 out.go:177] * Verifying Kubernetes components...
	I1005 21:53:14.187640 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:14.382467 1559793 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 21:53:14.382461 1559793 node_ready.go:35] waiting up to 6m0s for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388106 1559793 node_ready.go:49] node "pause-235090" has status "Ready":"True"
	I1005 21:53:14.388130 1559793 node_ready.go:38] duration metric: took 5.574639ms waiting for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388141 1559793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:14.496213 1559793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.889027 1559793 pod_ready.go:92] pod "coredns-5dd5756b68-84s28" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.889048 1559793 pod_ready.go:81] duration metric: took 392.808412ms waiting for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.889061 1559793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288189 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.288221 1559793 pod_ready.go:81] duration metric: took 399.152165ms waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288255 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689154 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.689181 1559793 pod_ready.go:81] duration metric: took 400.912499ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689197 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088471 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.088511 1559793 pod_ready.go:81] duration metric: took 399.305626ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088550 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488283 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.488310 1559793 pod_ready.go:81] duration metric: took 399.749752ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488322 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888834 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.888857 1559793 pod_ready.go:81] duration metric: took 400.527006ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888866 1559793 pod_ready.go:38] duration metric: took 2.500715514s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:16.888884 1559793 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:53:16.888962 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:53:16.907528 1559793 api_server.go:72] duration metric: took 2.723451889s to wait for apiserver process to appear ...
	I1005 21:53:16.907553 1559793 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:53:16.907570 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:16.918166 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 21:53:16.919632 1559793 api_server.go:141] control plane version: v1.28.2
	I1005 21:53:16.919653 1559793 api_server.go:131] duration metric: took 12.093841ms to wait for apiserver health ...
	I1005 21:53:16.919661 1559793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:53:17.090937 1559793 system_pods.go:59] 7 kube-system pods found
	I1005 21:53:17.091042 1559793 system_pods.go:61] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.091065 1559793 system_pods.go:61] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.091097 1559793 system_pods.go:61] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.091126 1559793 system_pods.go:61] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.091147 1559793 system_pods.go:61] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.091176 1559793 system_pods.go:61] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.091195 1559793 system_pods.go:61] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.091212 1559793 system_pods.go:74] duration metric: took 171.544601ms to wait for pod list to return data ...
	I1005 21:53:17.091252 1559793 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:53:17.290673 1559793 default_sa.go:45] found service account: "default"
	I1005 21:53:17.290699 1559793 default_sa.go:55] duration metric: took 199.427869ms for default service account to be created ...
	I1005 21:53:17.290711 1559793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:53:17.492133 1559793 system_pods.go:86] 7 kube-system pods found
	I1005 21:53:17.492162 1559793 system_pods.go:89] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.492170 1559793 system_pods.go:89] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.492176 1559793 system_pods.go:89] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.492181 1559793 system_pods.go:89] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.492209 1559793 system_pods.go:89] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.492215 1559793 system_pods.go:89] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.492226 1559793 system_pods.go:89] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.492234 1559793 system_pods.go:126] duration metric: took 201.516793ms to wait for k8s-apps to be running ...
	I1005 21:53:17.492248 1559793 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:53:17.492353 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:17.521660 1559793 system_svc.go:56] duration metric: took 29.387989ms WaitForService to wait for kubelet.
	I1005 21:53:17.521685 1559793 kubeadm.go:581] duration metric: took 3.337617493s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:53:17.521705 1559793 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:53:17.688332 1559793 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:53:17.688360 1559793 node_conditions.go:123] node cpu capacity is 2
	I1005 21:53:17.688370 1559793 node_conditions.go:105] duration metric: took 166.659808ms to run NodePressure ...
	I1005 21:53:17.688382 1559793 start.go:228] waiting for startup goroutines ...
	I1005 21:53:17.688388 1559793 start.go:233] waiting for cluster config update ...
	I1005 21:53:17.688395 1559793 start.go:242] writing updated cluster config ...
	I1005 21:53:17.688916 1559793 ssh_runner.go:195] Run: rm -f paused
	I1005 21:53:17.828573 1559793 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:53:17.831375 1559793 out.go:177] * Done! kubectl is now configured to use "pause-235090" cluster and "default" namespace by default

                                                
                                                
** /stderr **
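
The api_server.go entries in the trace above show how minikube gates a restart on control-plane health: it polls the apiserver's /healthz endpoint until it answers HTTP 200 (here within about 12ms). A minimal Go sketch of that polling pattern follows, using the endpoint taken from the log; this is an illustration only, and the insecure TLS client is an assumption made for brevity, where minikube itself authenticates against the cluster CA and client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// TLS verification is skipped only in this sketch; the real
			// client verifies the apiserver with the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200: control plane is up
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint as logged: https://192.168.67.2:8443/healthz
		if err := waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
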
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-235090
helpers_test.go:235: (dbg) docker inspect pause-235090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a",
	        "Created": "2023-10-05T21:51:15.825802101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1553773,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:51:16.367485147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/hosts",
	        "LogPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a-json.log",
	        "Name": "/pause-235090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-235090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-235090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c-init/diff:/var/lib/docker/overlay2/d90b9e2f667f252141d832d5a382f20f93e3e59a1248437095891beeaafeffd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-235090",
	                "Source": "/var/lib/docker/volumes/pause-235090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-235090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-235090",
	                "name.minikube.sigs.k8s.io": "pause-235090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c94af4d8234b208cb7ff001f333803da92f415c7111320ee01e963d67806d99e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c94af4d8234b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-235090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fe18557f83a9",
	                        "pause-235090"
	                    ],
	                    "NetworkID": "a38500a611f613a10c557da5d0aa104bcdd8797878e4db0b60837f68e8afd5da",
	                    "EndpointID": "11b0d0ae216441da70941560a68e1a447561747f0acc343cb250fb6050362693",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
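
When a post-mortem only needs a few fields, the full docker inspect JSON above can be narrowed with a Go template passed to --format. For example, the host port mapped to the apiserver's 8443/tcp (34220 in the dump above) can be read directly from the same container:

	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-235090
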
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-235090 -n pause-235090
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-235090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-235090 logs -n 25: (2.210733011s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo docker                         | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo find                           | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo crio                           | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-798214                                     | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC | 05 Oct 23 21:52 UTC |
	| start   | -p force-systemd-env-782488                          | force-systemd-env-782488  | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC | 05 Oct 23 21:53 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-782488                          | force-systemd-env-782488  | jenkins | v1.31.2 | 05 Oct 23 21:53 UTC | 05 Oct 23 21:53 UTC |
	| start   | -p force-systemd-flag-591577                         | force-systemd-flag-591577 | jenkins | v1.31.2 | 05 Oct 23 21:53 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:53:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:53:11.183987 1567019 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:53:11.184202 1567019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:53:11.184230 1567019 out.go:309] Setting ErrFile to fd 2...
	I1005 21:53:11.184251 1567019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:53:11.184614 1567019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:53:11.185106 1567019 out.go:303] Setting JSON to false
	I1005 21:53:11.186446 1567019 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27339,"bootTime":1696515453,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:53:11.186523 1567019 start.go:138] virtualization:  
	I1005 21:53:11.190893 1567019 out.go:177] * [force-systemd-flag-591577] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:53:11.192756 1567019 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:53:11.194280 1567019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:53:11.193004 1567019 notify.go:220] Checking for updates...
	I1005 21:53:11.198607 1567019 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:53:11.200534 1567019 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:53:11.202188 1567019 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:53:11.203894 1567019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:53:11.206336 1567019 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:11.206486 1567019 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:53:11.231659 1567019 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:53:11.231761 1567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:53:11.328008 1567019 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:53:11.317859953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:53:11.328115 1567019 docker.go:294] overlay module found
	I1005 21:53:11.330227 1567019 out.go:177] * Using the docker driver based on user configuration
	I1005 21:53:11.332064 1567019 start.go:298] selected driver: docker
	I1005 21:53:11.332082 1567019 start.go:902] validating driver "docker" against <nil>
	I1005 21:53:11.332096 1567019 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:53:11.332755 1567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:53:11.404467 1567019 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:53:11.393856214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:53:11.404636 1567019 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:53:11.404854 1567019 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 21:53:11.406777 1567019 out.go:177] * Using Docker driver with root privileges
	I1005 21:53:11.408835 1567019 cni.go:84] Creating CNI manager for ""
	I1005 21:53:11.408864 1567019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:53:11.408877 1567019 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:53:11.408894 1567019 start_flags.go:321] config:
	{Name:force-systemd-flag-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-591577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:53:11.411344 1567019 out.go:177] * Starting control plane node force-systemd-flag-591577 in cluster force-systemd-flag-591577
	I1005 21:53:11.413323 1567019 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:53:11.415333 1567019 out.go:177] * Pulling base image ...
	I1005 21:53:11.417148 1567019 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:53:11.417182 1567019 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:53:11.417201 1567019 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:53:11.417212 1567019 cache.go:57] Caching tarball of preloaded images
	I1005 21:53:11.417297 1567019 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:53:11.417307 1567019 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:53:11.417467 1567019 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/force-systemd-flag-591577/config.json ...
	I1005 21:53:11.417499 1567019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/force-systemd-flag-591577/config.json: {Name:mkf3d35cf33f196ae8c4bd78be14b4517e616eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:11.441456 1567019 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:53:11.441478 1567019 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:53:11.441499 1567019 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:53:11.441529 1567019 start.go:365] acquiring machines lock for force-systemd-flag-591577: {Name:mk664635e434a5a923a238d051a37faacfe7887a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:53:11.441715 1567019 start.go:369] acquired machines lock for "force-systemd-flag-591577" in 158.958µs
	I1005 21:53:11.441744 1567019 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-591577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:53:11.441820 1567019 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:53:10.591294 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:12.591330 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:14.108080 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.108149 1559793 pod_ready.go:81] duration metric: took 5.538517474s waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.108176 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120480 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.120540 1559793 pod_ready.go:81] duration metric: took 12.214096ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120575 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133118 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.133187 1559793 pod_ready.go:81] duration metric: took 12.590317ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133214 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144134 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.144205 1559793 pod_ready.go:81] duration metric: took 10.969224ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144231 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153585 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.153605 1559793 pod_ready.go:81] duration metric: took 9.354178ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153615 1559793 pod_ready.go:38] duration metric: took 9.114777614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:14.153632 1559793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:53:14.170452 1559793 ops.go:34] apiserver oom_adj: -16
	I1005 21:53:14.170472 1559793 kubeadm.go:640] restartCluster took 57.20564951s
	I1005 21:53:14.170482 1559793 kubeadm.go:406] StartCluster complete in 57.328321693s
	I1005 21:53:14.170498 1559793 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.170563 1559793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:53:14.171226 1559793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.171969 1559793 kapi.go:59] client config for pause-235090: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:53:14.172413 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:53:14.172747 1559793 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:14.172784 1559793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:53:14.176295 1559793 out.go:177] * Enabled addons: 
	I1005 21:53:14.178016 1559793 addons.go:502] enable addons completed in 5.226528ms: enabled=[]
	I1005 21:53:14.184006 1559793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-235090" context rescaled to 1 replicas
	I1005 21:53:14.184044 1559793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:53:14.185870 1559793 out.go:177] * Verifying Kubernetes components...
	I1005 21:53:14.187640 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:14.382467 1559793 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 21:53:14.382461 1559793 node_ready.go:35] waiting up to 6m0s for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388106 1559793 node_ready.go:49] node "pause-235090" has status "Ready":"True"
	I1005 21:53:14.388130 1559793 node_ready.go:38] duration metric: took 5.574639ms waiting for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388141 1559793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:14.496213 1559793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:11.444395 1567019 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1005 21:53:11.444664 1567019 start.go:159] libmachine.API.Create for "force-systemd-flag-591577" (driver="docker")
	I1005 21:53:11.444692 1567019 client.go:168] LocalClient.Create starting
	I1005 21:53:11.444777 1567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:53:11.444816 1567019 main.go:141] libmachine: Decoding PEM data...
	I1005 21:53:11.444830 1567019 main.go:141] libmachine: Parsing certificate...
	I1005 21:53:11.444899 1567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:53:11.444917 1567019 main.go:141] libmachine: Decoding PEM data...
	I1005 21:53:11.444927 1567019 main.go:141] libmachine: Parsing certificate...
	I1005 21:53:11.445299 1567019 cli_runner.go:164] Run: docker network inspect force-systemd-flag-591577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:53:11.472105 1567019 cli_runner.go:211] docker network inspect force-systemd-flag-591577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:53:11.472191 1567019 network_create.go:281] running [docker network inspect force-systemd-flag-591577] to gather additional debugging logs...
	I1005 21:53:11.472207 1567019 cli_runner.go:164] Run: docker network inspect force-systemd-flag-591577
	W1005 21:53:11.498433 1567019 cli_runner.go:211] docker network inspect force-systemd-flag-591577 returned with exit code 1
	I1005 21:53:11.498464 1567019 network_create.go:284] error running [docker network inspect force-systemd-flag-591577]: docker network inspect force-systemd-flag-591577: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-591577 not found
	I1005 21:53:11.498485 1567019 network_create.go:286] output of [docker network inspect force-systemd-flag-591577]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-591577 not found
	
	** /stderr **
	I1005 21:53:11.498822 1567019 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:53:11.518818 1567019 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d16b9e9a692c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:9e:45:13} reservation:<nil>}
	I1005 21:53:11.519202 1567019 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f25a4bc44290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:89:8c:51:03} reservation:<nil>}
	I1005 21:53:11.519533 1567019 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38500a611f6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:75:ea:d6:d9} reservation:<nil>}
	I1005 21:53:11.520001 1567019 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001213170}
	I1005 21:53:11.520026 1567019 network_create.go:124] attempt to create docker network force-systemd-flag-591577 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1005 21:53:11.520090 1567019 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-591577 force-systemd-flag-591577
	I1005 21:53:11.599587 1567019 network_create.go:108] docker network force-systemd-flag-591577 192.168.76.0/24 created
	I1005 21:53:11.599620 1567019 kic.go:117] calculated static IP "192.168.76.2" for the "force-systemd-flag-591577" container
	I1005 21:53:11.599691 1567019 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:53:11.617676 1567019 cli_runner.go:164] Run: docker volume create force-systemd-flag-591577 --label name.minikube.sigs.k8s.io=force-systemd-flag-591577 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:53:11.636614 1567019 oci.go:103] Successfully created a docker volume force-systemd-flag-591577
	I1005 21:53:11.636717 1567019 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-591577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-591577 --entrypoint /usr/bin/test -v force-systemd-flag-591577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:53:12.293040 1567019 oci.go:107] Successfully prepared a docker volume force-systemd-flag-591577
	I1005 21:53:12.293088 1567019 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:53:12.293108 1567019 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:53:12.293202 1567019 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-591577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
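	
	The subnet scan above (network.go skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 before settling on 192.168.76.0/24) can be approximated in a few lines of Go. This is a simplified sketch inferred from the log output, not minikube's actual network.go; the starting octet and the step of 9 are assumptions read off the skipped subnets.
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// firstFreeSubnet returns the first 192.168.x.0/24 candidate not in the
	// taken list, stepping the third octet by 9 (49, 58, 67, 76, ...) as the
	// log above suggests. Step size and range are inferred, not minikube's code.
	func firstFreeSubnet(taken []string) (*net.IPNet, error) {
		used := make(map[string]bool, len(taken))
		for _, t := range taken {
			_, n, err := net.ParseCIDR(t)
			if err != nil {
				return nil, fmt.Errorf("bad CIDR %q: %w", t, err)
			}
			used[n.String()] = true
		}
		for octet := 49; octet <= 247; octet += 9 {
			_, n, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
			if !used[n.String()] {
				return n, nil
			}
		}
		return nil, fmt.Errorf("no free /24 in the scanned range")
	}
	
	func main() {
		// The three bridge networks the inspect calls above reported as taken.
		free, err := firstFreeSubnet([]string{
			"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
		})
		if err != nil {
			panic(err)
		}
		fmt.Println(free) // 192.168.76.0/24, matching the log
	}
	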
	I1005 21:53:14.889027 1559793 pod_ready.go:92] pod "coredns-5dd5756b68-84s28" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.889048 1559793 pod_ready.go:81] duration metric: took 392.808412ms waiting for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.889061 1559793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288189 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.288221 1559793 pod_ready.go:81] duration metric: took 399.152165ms waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288255 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689154 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.689181 1559793 pod_ready.go:81] duration metric: took 400.912499ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689197 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088471 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.088511 1559793 pod_ready.go:81] duration metric: took 399.305626ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088550 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488283 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.488310 1559793 pod_ready.go:81] duration metric: took 399.749752ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488322 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888834 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.888857 1559793 pod_ready.go:81] duration metric: took 400.527006ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888866 1559793 pod_ready.go:38] duration metric: took 2.500715514s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:16.888884 1559793 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:53:16.888962 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:53:16.907528 1559793 api_server.go:72] duration metric: took 2.723451889s to wait for apiserver process to appear ...
	I1005 21:53:16.907553 1559793 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:53:16.907570 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:16.918166 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1005 21:53:16.919632 1559793 api_server.go:141] control plane version: v1.28.2
	I1005 21:53:16.919653 1559793 api_server.go:131] duration metric: took 12.093841ms to wait for apiserver health ...
	I1005 21:53:16.919661 1559793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:53:17.090937 1559793 system_pods.go:59] 7 kube-system pods found
	I1005 21:53:17.091042 1559793 system_pods.go:61] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.091065 1559793 system_pods.go:61] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.091097 1559793 system_pods.go:61] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.091126 1559793 system_pods.go:61] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.091147 1559793 system_pods.go:61] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.091176 1559793 system_pods.go:61] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.091195 1559793 system_pods.go:61] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.091212 1559793 system_pods.go:74] duration metric: took 171.544601ms to wait for pod list to return data ...
	I1005 21:53:17.091252 1559793 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:53:17.290673 1559793 default_sa.go:45] found service account: "default"
	I1005 21:53:17.290699 1559793 default_sa.go:55] duration metric: took 199.427869ms for default service account to be created ...
	I1005 21:53:17.290711 1559793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:53:17.492133 1559793 system_pods.go:86] 7 kube-system pods found
	I1005 21:53:17.492162 1559793 system_pods.go:89] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.492170 1559793 system_pods.go:89] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.492176 1559793 system_pods.go:89] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.492181 1559793 system_pods.go:89] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.492209 1559793 system_pods.go:89] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.492215 1559793 system_pods.go:89] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.492226 1559793 system_pods.go:89] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.492234 1559793 system_pods.go:126] duration metric: took 201.516793ms to wait for k8s-apps to be running ...
	I1005 21:53:17.492248 1559793 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:53:17.492353 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:17.521660 1559793 system_svc.go:56] duration metric: took 29.387989ms WaitForService to wait for kubelet.
	I1005 21:53:17.521685 1559793 kubeadm.go:581] duration metric: took 3.337617493s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:53:17.521705 1559793 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:53:17.688332 1559793 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:53:17.688360 1559793 node_conditions.go:123] node cpu capacity is 2
	I1005 21:53:17.688370 1559793 node_conditions.go:105] duration metric: took 166.659808ms to run NodePressure ...
	I1005 21:53:17.688382 1559793 start.go:228] waiting for startup goroutines ...
	I1005 21:53:17.688388 1559793 start.go:233] waiting for cluster config update ...
	I1005 21:53:17.688395 1559793 start.go:242] writing updated cluster config ...
	I1005 21:53:17.688916 1559793 ssh_runner.go:195] Run: rm -f paused
	I1005 21:53:17.828573 1559793 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:53:17.831375 1559793 out.go:177] * Done! kubectl is now configured to use "pause-235090" cluster and "default" namespace by default
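	
	The pod_ready.go waits earlier in this log (each kube-system pod polled until its Ready condition reports "True") map onto a small client-go loop. A minimal sketch under assumed names: it uses a plain poll loop rather than minikube's actual helpers, and the kubeconfig path is a placeholder.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its PodReady condition is True, the same
	// check pod_ready.go reports above ("has status \"Ready\":\"True\"").
	func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(400 * time.Millisecond) // modest interval; the waits above resolved in ~400ms steps
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}
	
	func main() {
		// Placeholder kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), client, "kube-system", "etcd-pause-235090", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	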
	
	* 
	* ==> CRI-O <==
	* Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.454669707Z" level=info msg="Creating container: kube-system/kindnet-ntfxs/kindnet-cni" id=c65c0f3c-7ec5-49f6-ab26-6100ff57992a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.454724361Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.488301579Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3ea67d8620f5a8b6787e3045994419f734f90704ed707865c5d1f369101f62e0/merged/etc/passwd: no such file or directory"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.488768310Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3ea67d8620f5a8b6787e3045994419f734f90704ed707865c5d1f369101f62e0/merged/etc/group: no such file or directory"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.741779994Z" level=info msg="Created container 6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6: kube-system/coredns-5dd5756b68-84s28/coredns" id=4f085f2f-ff69-46b0-877b-208cf76ec40d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.742655144Z" level=info msg="Starting container: 6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6" id=43d74176-444f-415a-9e4d-4c88e83f5202 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.767415061Z" level=info msg="Started container" PID=3772 containerID=6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6 description=kube-system/coredns-5dd5756b68-84s28/coredns id=43d74176-444f-415a-9e4d-4c88e83f5202 name=/runtime.v1.RuntimeService/StartContainer sandboxID=932929e3687b0a33fadf97d4460547523107972997c6f040e3e0cac5fef44c70
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.815764148Z" level=info msg="Created container 997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952: kube-system/kindnet-ntfxs/kindnet-cni" id=c65c0f3c-7ec5-49f6-ab26-6100ff57992a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.821937227Z" level=info msg="Starting container: 997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952" id=02d81017-e5ab-4aef-b8fc-75ec0176551d name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.826359276Z" level=info msg="Created container 9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37: kube-system/kube-proxy-q7sdt/kube-proxy" id=6de4b439-80cc-4631-a4af-80cb79cb8889 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.829872084Z" level=info msg="Starting container: 9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37" id=3e5ca454-9e9c-4898-bd77-d99fe7da3145 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.857908205Z" level=info msg="Started container" PID=3789 containerID=997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952 description=kube-system/kindnet-ntfxs/kindnet-cni id=02d81017-e5ab-4aef-b8fc-75ec0176551d name=/runtime.v1.RuntimeService/StartContainer sandboxID=600beb5cc0b7b20340c07b218da8184f72bffa203f2a845790c78c374be4adab
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.859604220Z" level=info msg="Started container" PID=3782 containerID=9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37 description=kube-system/kube-proxy-q7sdt/kube-proxy id=3e5ca454-9e9c-4898-bd77-d99fe7da3145 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d1b1958be9ccd2cb88fb3e1429ccbc440797405b4a96bc7c168ef3eae9e238d
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.339044666Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363306064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363349650Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363370967Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387902585Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387946064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387967979Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396633436Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396681543Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396698593Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.416056811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.416092848Z" level=info msg="Updated default CNI network name to kindnet"
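	
	The CNI monitoring events above (CREATE/WRITE/RENAME on /etc/cni/net.d) come from an inotify-style watch on the config directory. Roughly the same behaviour can be reproduced with the fsnotify package; this is a stand-alone sketch, not CRI-O's implementation.
	
	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
	
		// Watch the CNI config directory, as CRI-O does for conflist changes.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				// Mirrors the "CNI monitoring event ... CREATE/WRITE/RENAME" lines.
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", err)
			}
		}
	}
	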
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	997042ab3c1a2       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   17 seconds ago       Running             kindnet-cni               3                   600beb5cc0b7b       kindnet-ntfxs
	9359faa106d29       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   17 seconds ago       Running             kube-proxy                3                   5d1b1958be9cc       kube-proxy-q7sdt
	6b1ed923e252e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   17 seconds ago       Running             coredns                   2                   932929e3687b0       coredns-5dd5756b68-84s28
	aecb8aedf6505       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   25 seconds ago       Running             kube-controller-manager   2                   83bc0a0b9c968       kube-controller-manager-pause-235090
	33cacc9697a38       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   25 seconds ago       Running             kube-apiserver            2                   51f7a2b5b4427       kube-apiserver-pause-235090
	64a528443baf7       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   25 seconds ago       Running             kube-scheduler            2                   c820472ec9b8a       kube-scheduler-pause-235090
	6353ff01ab7d3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   25 seconds ago       Running             etcd                      3                   6da149e9221cd       etcd-pause-235090
	614251a774215       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   48 seconds ago       Exited              kube-proxy                2                   5d1b1958be9cc       kube-proxy-q7sdt
	dcf8d0b0bd636       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   51 seconds ago       Exited              kindnet-cni               2                   600beb5cc0b7b       kindnet-ntfxs
	6e63f3937b88b       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   51 seconds ago       Exited              kube-controller-manager   1                   83bc0a0b9c968       kube-controller-manager-pause-235090
	6837ad1a1228a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   54 seconds ago       Exited              etcd                      2                   6da149e9221cd       etcd-pause-235090
	bf2ac0efa83bc       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   57 seconds ago       Exited              kube-scheduler            1                   c820472ec9b8a       kube-scheduler-pause-235090
	b19ba02b4f418       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   57 seconds ago       Exited              coredns                   1                   932929e3687b0       coredns-5dd5756b68-84s28
	bec04f1405f80       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   About a minute ago   Exited              kube-apiserver            1                   51f7a2b5b4427       kube-apiserver-pause-235090
	
	* 
	* ==> coredns [6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56992 - 6829 "HINFO IN 55054332545358397.485839749512134514. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.037817861s
	
	* 
	* ==> coredns [b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46008 - 62895 "HINFO IN 5083142650138728424.8248962513567012342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024009831s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-235090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-235090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=pause-235090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_51_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:51:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-235090
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:53:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-235090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c059abc9ed041bb9e4add7a29440c15
	  System UUID:                644bb90f-91ba-4716-84ce-d58ea0025b06
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-84s28                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     81s
	  kube-system                 etcd-pause-235090                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         94s
	  kube-system                 kindnet-ntfxs                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-pause-235090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-235090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-q7sdt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-pause-235090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 79s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     94s                  kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    94s                  kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  94s                  kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           82s                  node-controller  Node pause-235090 event: Registered Node pause-235090 in Controller
	  Normal  NodeReady                79s                  kubelet          Node pause-235090 status is now: NodeReady
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)    kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)    kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x8 over 27s)    kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                   node-controller  Node pause-235090 event: Registered Node pause-235090 in Controller
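	
	The NodePressure verification logged earlier (node_conditions.go reading the 203034800Ki ephemeral-storage capacity and the cpu count of 2) reads exactly the Conditions and Capacity blocks shown above. A client-go sketch of the same read, with an assumed kubeconfig path:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Placeholder kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		node, err := client.CoreV1().Nodes().Get(context.Background(), "pause-235090", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The three pressure conditions the NodePressure step verifies.
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("%-16s %s\n", cond.Type, cond.Status)
			}
		}
		// The two capacities the log reports (203034800Ki storage, 2 cpu).
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	}
	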
	
	* 
	* ==> dmesg <==
	* [  +0.001109] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000053 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001037] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +0.002754] FS-Cache: Duplicate cookie detected
	[  +0.000682] FS-Cache: O-cookie c=0000004d [p=0000004a fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000005885d3f4
	[  +0.001100] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000054 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000915] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000009c3c0e5e
	[  +0.001020] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +2.998730] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000004b [p=0000004a fl=226 nc=0 na=1]
	[  +0.000947] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000003ef1d116
	[  +0.001076] FS-Cache: O-key=[8] '6ed7c90000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000056 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000003824801
	[  +0.001036] FS-Cache: N-key=[8] '6ed7c90000000000'
	[  +0.302950] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000050 [p=0000004a fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=00000000b99a9016
	[  +0.001212] FS-Cache: O-key=[8] '74d7c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000057 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001183] FS-Cache: N-key=[8] '74d7c90000000000'
	
	* 
	* ==> etcd [6353ff01ab7d39192ebd50473fa23e2f6bb9ea623ff1c00d65c804fc465cd7c9] <==
	* {"level":"info","ts":"2023-10-05T21:52:54.816405Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:52:54.816414Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:52:54.816658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-10-05T21:52:54.816731Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-10-05T21:52:54.816814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:52:54.816856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:52:54.848558Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-05T21:52:54.848763Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-05T21:52:54.848787Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T21:52:54.84889Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:54.848899Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:56.039451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.049663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:56.050668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-05T21:52:56.051029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:56.051849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:52:56.049624Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-235090 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:52:56.052337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:56.097487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44] <==
	* {"level":"info","ts":"2023-10-05T21:52:25.507322Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:26.88386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.883903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.88393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.883944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.883961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88491Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-235090 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:52:26.884947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:26.884986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:26.886224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:52:26.885099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:26.886323Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:26.894242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-05T21:52:34.429446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-05T21:52:34.429505Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-235090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-05T21:52:34.429609Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.429689Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.464775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.464915Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-05T21:52:34.464981Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-05T21:52:34.467312Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:34.467522Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:34.467571Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-235090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:53:20 up  7:35,  0 users,  load average: 5.74, 3.23, 2.13
	Linux pause-235090 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952] <==
	* I1005 21:53:02.929239       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:53:02.929574       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1005 21:53:02.929761       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:53:02.930502       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:53:02.930574       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:53:03.332997       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:53:03.333026       1 main.go:227] handling current node
	I1005 21:53:13.353927       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:53:13.354056       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796] <==
	* I1005 21:52:28.511412       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:52:28.511676       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1005 21:52:28.511882       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:52:28.511950       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:52:28.511990       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:52:32.595829       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:52:32.595867       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [33cacc9697a38de5e15d23b014b59e1c6336291a819ecb274d69930802781451] <==
	* I1005 21:53:01.779916       1 naming_controller.go:291] Starting NamingConditionController
	I1005 21:53:01.779958       1 establishing_controller.go:76] Starting EstablishingController
	I1005 21:53:01.779998       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1005 21:53:01.780038       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1005 21:53:01.780078       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1005 21:53:01.903293       1 shared_informer.go:318] Caches are synced for configmaps
	I1005 21:53:01.916661       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1005 21:53:01.921730       1 aggregator.go:166] initial CRD sync complete...
	I1005 21:53:01.921823       1 autoregister_controller.go:141] Starting autoregister controller
	I1005 21:53:01.923560       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1005 21:53:01.923623       1 cache.go:39] Caches are synced for autoregister controller
	I1005 21:53:01.926590       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 21:53:01.964584       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1005 21:53:01.966926       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1005 21:53:02.002738       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1005 21:53:02.002877       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1005 21:53:02.005480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 21:53:02.011170       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1005 21:53:02.011290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 21:53:02.683935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1005 21:53:04.726804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1005 21:53:04.909913       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1005 21:53:04.928078       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1005 21:53:05.000660       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 21:53:05.022511       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.790271       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.804009       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.826681       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
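	
	The repeated grpc dial failures above just mean this apiserver instance outlived its etcd during the restart: nothing was listening on 127.0.0.1:2379 yet. A quick probe of that port from Go reproduces the "connection refused" distinction (a hypothetical check, not part of the test suite):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe the etcd client port the apiserver was dialing above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			// During the window logged above this prints "connection refused".
			fmt.Println("etcd not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("etcd client port is accepting connections")
	}
	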
	
	* 
	* ==> kube-controller-manager [6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d] <==
	* I1005 21:52:30.655735       1 serving.go:348] Generated self-signed cert in-memory
	I1005 21:52:33.554877       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1005 21:52:33.554993       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:52:33.556406       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1005 21:52:33.556613       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1005 21:52:33.557619       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1005 21:52:33.557703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [aecb8aedf650580a9bed8ff109591ffc948db3d88232d00b3c7d4da59388e5ef] <==
	* I1005 21:53:14.696584       1 shared_informer.go:318] Caches are synced for service account
	I1005 21:53:14.696985       1 shared_informer.go:318] Caches are synced for namespace
	I1005 21:53:14.705236       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"pause-235090\" does not exist"
	I1005 21:53:14.718405       1 shared_informer.go:318] Caches are synced for resource quota
	I1005 21:53:14.727375       1 shared_informer.go:318] Caches are synced for node
	I1005 21:53:14.727557       1 range_allocator.go:174] "Sending events to api server"
	I1005 21:53:14.727592       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1005 21:53:14.727598       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1005 21:53:14.727611       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1005 21:53:14.730680       1 shared_informer.go:318] Caches are synced for GC
	I1005 21:53:14.737606       1 shared_informer.go:318] Caches are synced for attach detach
	I1005 21:53:14.741816       1 shared_informer.go:318] Caches are synced for daemon sets
	I1005 21:53:14.741918       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1005 21:53:14.754517       1 shared_informer.go:318] Caches are synced for taint
	I1005 21:53:14.754773       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1005 21:53:14.754894       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-235090"
	I1005 21:53:14.754944       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1005 21:53:14.754963       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1005 21:53:14.754978       1 taint_manager.go:211] "Sending events to api server"
	I1005 21:53:14.755620       1 event.go:307] "Event occurred" object="pause-235090" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-235090 event: Registered Node pause-235090 in Controller"
	I1005 21:53:14.792583       1 shared_informer.go:318] Caches are synced for TTL
	I1005 21:53:14.805386       1 shared_informer.go:318] Caches are synced for persistent volume
	I1005 21:53:15.122054       1 shared_informer.go:318] Caches are synced for garbage collector
	I1005 21:53:15.164412       1 shared_informer.go:318] Caches are synced for garbage collector
	I1005 21:53:15.164448       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4] <==
	* I1005 21:52:32.744013       1 server_others.go:69] "Using iptables proxy"
	I1005 21:52:33.062728       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1005 21:52:33.652176       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:52:33.658973       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:52:33.659090       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:52:33.659123       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:52:33.659265       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:52:33.659520       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:52:33.659762       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:52:33.660701       1 config.go:188] "Starting service config controller"
	I1005 21:52:33.660801       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:52:33.660862       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:52:33.660903       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:52:33.661659       1 config.go:315] "Starting node config controller"
	I1005 21:52:33.661716       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:52:33.765152       1 shared_informer.go:318] Caches are synced for node config
	I1005 21:52:33.784280       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:52:33.784308       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37] <==
	* I1005 21:53:03.105053       1 server_others.go:69] "Using iptables proxy"
	I1005 21:53:03.177832       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1005 21:53:03.448541       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:53:03.475784       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:53:03.475832       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:53:03.475842       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:53:03.475897       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:53:03.480064       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:53:03.480089       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:53:03.485482       1 config.go:188] "Starting service config controller"
	I1005 21:53:03.486607       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:53:03.486733       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:53:03.489172       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:53:03.509194       1 config.go:315] "Starting node config controller"
	I1005 21:53:03.509302       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:53:03.587706       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:53:03.589950       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1005 21:53:03.610356       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [64a528443baf725492413f2f9d751e39b79a50738c6a4cb4dea6bec7dfba46b8] <==
	* I1005 21:52:58.805312       1 serving.go:348] Generated self-signed cert in-memory
	W1005 21:53:01.790069       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 21:53:01.790178       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:53:01.790211       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 21:53:01.790252       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:53:01.893024       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1005 21:53:01.896485       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:53:01.898573       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1005 21:53:01.906421       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1005 21:53:01.906538       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:53:01.906593       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1005 21:53:01.923342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:53:01.923467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1005 21:53:02.009896       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0] <==
	* E1005 21:52:32.478226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1005 21:52:32.478792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.478813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.478886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.478899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.478956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 21:52:32.478966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 21:52:32.481681       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:52:32.481712       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:52:32.498930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:52:32.498967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 21:52:32.499051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 21:52:32.499100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1005 21:52:32.538477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.538602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.538750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.538794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.541591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:52:32.541670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1005 21:52:32.541775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:52:32.541825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1005 21:52:32.541874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:52:32.541926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1005 21:52:33.660408       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1005 21:52:34.584653       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.132183    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-235090&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: W1005 21:52:54.186305    3523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.186377    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: W1005 21:52:54.480024    3523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.480099    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.586308    3523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-235090?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Oct 05 21:52:54 pause-235090 kubelet[3523]: I1005 21:52:54.694258    3523 kubelet_node_status.go:70] "Attempting to register node" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.968173    3523 kubelet_node_status.go:108] "Node was previously registered" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.968289    3523 kubelet_node_status.go:73] "Successfully registered node" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.971351    3523 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.972277    3523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.141905    3523 apiserver.go:52] "Watching apiserver"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147190    3523 topology_manager.go:215] "Topology Admit Handler" podUID="d6f70b29-95e2-4894-95d2-97463d8af989" podNamespace="kube-system" podName="kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147332    3523 topology_manager.go:215] "Topology Admit Handler" podUID="f45facf4-987f-4d09-bc27-1f5cd7879216" podNamespace="kube-system" podName="kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147385    3523 topology_manager.go:215] "Topology Admit Handler" podUID="f9362fc7-f2d0-411f-a717-fa70ffafabcb" podNamespace="kube-system" podName="coredns-5dd5756b68-84s28"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.171095    3523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173425    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-cni-cfg\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173493    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45facf4-987f-4d09-bc27-1f5cd7879216-lib-modules\") pod \"kube-proxy-q7sdt\" (UID: \"f45facf4-987f-4d09-bc27-1f5cd7879216\") " pod="kube-system/kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173531    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-xtables-lock\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173585    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f45facf4-987f-4d09-bc27-1f5cd7879216-xtables-lock\") pod \"kube-proxy-q7sdt\" (UID: \"f45facf4-987f-4d09-bc27-1f5cd7879216\") " pod="kube-system/kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173623    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-lib-modules\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.447813    3523 scope.go:117] "RemoveContainer" containerID="b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.449406    3523 scope.go:117] "RemoveContainer" containerID="dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.449836    3523 scope.go:117] "RemoveContainer" containerID="614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4"
	Oct 05 21:53:08 pause-235090 kubelet[3523]: I1005 21:53:08.158208    3523 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
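(The kubelet entries above are read from the node's systemd journal; one way to pull the same stream by hand, assuming the pause-235090 profile is still running, is:

	out/minikube-linux-arm64 -p pause-235090 ssh -- sudo journalctl -u kubelet --no-pager -n 25

which prints the last 25 kubelet journal lines without paging.)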
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-235090 -n pause-235090
helpers_test.go:261: (dbg) Run:  kubectl --context pause-235090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
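(The --field-selector=status.phase!=Running query above is the harness's quick check for unhealthy pods; a standalone sketch of the same check, assuming the pause-235090 context is still reachable, would be:

	kubectl --context pause-235090 get pods -A --field-selector=status.phase!=Running

An empty result means every pod in the cluster reports phase Running.)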
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-235090
helpers_test.go:235: (dbg) docker inspect pause-235090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a",
	        "Created": "2023-10-05T21:51:15.825802101Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1553773,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-05T21:51:16.367485147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/hosts",
	        "LogPath": "/var/lib/docker/containers/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a/fe18557f83a908ed96d0b8e6fd84a55b26f7f7863867967a162c3c51478b943a-json.log",
	        "Name": "/pause-235090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-235090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-235090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c-init/diff:/var/lib/docker/overlay2/d90b9e2f667f252141d832d5a382f20f93e3e59a1248437095891beeaafeffd3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9243ac8ab9b5930324b58cf97d331784661ad28ed35d9e8cdcac8a69e82d9c8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-235090",
	                "Source": "/var/lib/docker/volumes/pause-235090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-235090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-235090",
	                "name.minikube.sigs.k8s.io": "pause-235090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c94af4d8234b208cb7ff001f333803da92f415c7111320ee01e963d67806d99e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c94af4d8234b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-235090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fe18557f83a9",
	                        "pause-235090"
	                    ],
	                    "NetworkID": "a38500a611f613a10c557da5d0aa104bcdd8797878e4db0b60837f68e8afd5da",
	                    "EndpointID": "11b0d0ae216441da70941560a68e1a447561747f0acc343cb250fb6050362693",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
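(Single fields can be pulled from this dump with docker's Go-template formatting instead of reading the full JSON; for example, a sketch that prints the host port published for the API server's 8443/tcp, shown above as 34220:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-235090

)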
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-235090 -n pause-235090
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-235090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-235090 logs -n 25: (2.402576924s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo docker                         | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo cat                            | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo                                | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo find                           | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-798214 sudo crio                           | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-798214                                     | cilium-798214             | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC | 05 Oct 23 21:52 UTC |
	| start   | -p force-systemd-env-782488                          | force-systemd-env-782488  | jenkins | v1.31.2 | 05 Oct 23 21:52 UTC | 05 Oct 23 21:53 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-782488                          | force-systemd-env-782488  | jenkins | v1.31.2 | 05 Oct 23 21:53 UTC | 05 Oct 23 21:53 UTC |
	| start   | -p force-systemd-flag-591577                         | force-systemd-flag-591577 | jenkins | v1.31.2 | 05 Oct 23 21:53 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:53:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:53:11.183987 1567019 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:53:11.184202 1567019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:53:11.184230 1567019 out.go:309] Setting ErrFile to fd 2...
	I1005 21:53:11.184251 1567019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:53:11.184614 1567019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:53:11.185106 1567019 out.go:303] Setting JSON to false
	I1005 21:53:11.186446 1567019 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27339,"bootTime":1696515453,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:53:11.186523 1567019 start.go:138] virtualization:  
	I1005 21:53:11.190893 1567019 out.go:177] * [force-systemd-flag-591577] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:53:11.192756 1567019 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:53:11.194280 1567019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:53:11.193004 1567019 notify.go:220] Checking for updates...
	I1005 21:53:11.198607 1567019 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:53:11.200534 1567019 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:53:11.202188 1567019 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:53:11.203894 1567019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:53:11.206336 1567019 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:11.206486 1567019 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:53:11.231659 1567019 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:53:11.231761 1567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:53:11.328008 1567019 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:53:11.317859953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:53:11.328115 1567019 docker.go:294] overlay module found
	I1005 21:53:11.330227 1567019 out.go:177] * Using the docker driver based on user configuration
	I1005 21:53:11.332064 1567019 start.go:298] selected driver: docker
	I1005 21:53:11.332082 1567019 start.go:902] validating driver "docker" against <nil>
	I1005 21:53:11.332096 1567019 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:53:11.332755 1567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:53:11.404467 1567019 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:53:11.393856214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:53:11.404636 1567019 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:53:11.404854 1567019 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 21:53:11.406777 1567019 out.go:177] * Using Docker driver with root privileges
	I1005 21:53:11.408835 1567019 cni.go:84] Creating CNI manager for ""
	I1005 21:53:11.408864 1567019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:53:11.408877 1567019 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:53:11.408894 1567019 start_flags.go:321] config:
	{Name:force-systemd-flag-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-591577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:53:11.411344 1567019 out.go:177] * Starting control plane node force-systemd-flag-591577 in cluster force-systemd-flag-591577
	I1005 21:53:11.413323 1567019 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:53:11.415333 1567019 out.go:177] * Pulling base image ...
	I1005 21:53:11.417148 1567019 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:53:11.417182 1567019 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:53:11.417201 1567019 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:53:11.417212 1567019 cache.go:57] Caching tarball of preloaded images
	I1005 21:53:11.417297 1567019 preload.go:174] Found /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1005 21:53:11.417307 1567019 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1005 21:53:11.417467 1567019 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/force-systemd-flag-591577/config.json ...
	I1005 21:53:11.417499 1567019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/force-systemd-flag-591577/config.json: {Name:mkf3d35cf33f196ae8c4bd78be14b4517e616eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:11.441456 1567019 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1005 21:53:11.441478 1567019 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1005 21:53:11.441499 1567019 cache.go:195] Successfully downloaded all kic artifacts
	I1005 21:53:11.441529 1567019 start.go:365] acquiring machines lock for force-systemd-flag-591577: {Name:mk664635e434a5a923a238d051a37faacfe7887a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 21:53:11.441715 1567019 start.go:369] acquired machines lock for "force-systemd-flag-591577" in 158.958µs
	I1005 21:53:11.441744 1567019 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-591577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-flag-591577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:53:11.441820 1567019 start.go:125] createHost starting for "" (driver="docker")
	I1005 21:53:10.591294 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:12.591330 1559793 pod_ready.go:102] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"False"
	I1005 21:53:14.108080 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.108149 1559793 pod_ready.go:81] duration metric: took 5.538517474s waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.108176 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120480 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.120540 1559793 pod_ready.go:81] duration metric: took 12.214096ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.120575 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133118 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.133187 1559793 pod_ready.go:81] duration metric: took 12.590317ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.133214 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144134 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.144205 1559793 pod_ready.go:81] duration metric: took 10.969224ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.144231 1559793 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153585 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.153605 1559793 pod_ready.go:81] duration metric: took 9.354178ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.153615 1559793 pod_ready.go:38] duration metric: took 9.114777614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:14.153632 1559793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1005 21:53:14.170452 1559793 ops.go:34] apiserver oom_adj: -16
	I1005 21:53:14.170472 1559793 kubeadm.go:640] restartCluster took 57.20564951s
	I1005 21:53:14.170482 1559793 kubeadm.go:406] StartCluster complete in 57.328321693s
	I1005 21:53:14.170498 1559793 settings.go:142] acquiring lock: {Name:mk7dada861cf2ca4f44d224c602a8425f2d31baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.170563 1559793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:53:14.171226 1559793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17363-1448442/kubeconfig: {Name:mkcdb0cb77435bcc2d7e177116f1a594e64ff454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1005 21:53:14.171969 1559793 kapi.go:59] client config for pause-235090: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt", KeyFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key", CAFile:"/home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16a20f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1005 21:53:14.172413 1559793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1005 21:53:14.172747 1559793 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:14.172784 1559793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1005 21:53:14.176295 1559793 out.go:177] * Enabled addons: 
	I1005 21:53:14.178016 1559793 addons.go:502] enable addons completed in 5.226528ms: enabled=[]
	I1005 21:53:14.184006 1559793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-235090" context rescaled to 1 replicas
	I1005 21:53:14.184044 1559793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1005 21:53:14.185870 1559793 out.go:177] * Verifying Kubernetes components...
	I1005 21:53:14.187640 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:14.382467 1559793 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1005 21:53:14.382461 1559793 node_ready.go:35] waiting up to 6m0s for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388106 1559793 node_ready.go:49] node "pause-235090" has status "Ready":"True"
	I1005 21:53:14.388130 1559793 node_ready.go:38] duration metric: took 5.574639ms waiting for node "pause-235090" to be "Ready" ...
	I1005 21:53:14.388141 1559793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:14.496213 1559793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:11.444395 1567019 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1005 21:53:11.444664 1567019 start.go:159] libmachine.API.Create for "force-systemd-flag-591577" (driver="docker")
	I1005 21:53:11.444692 1567019 client.go:168] LocalClient.Create starting
	I1005 21:53:11.444777 1567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem
	I1005 21:53:11.444816 1567019 main.go:141] libmachine: Decoding PEM data...
	I1005 21:53:11.444830 1567019 main.go:141] libmachine: Parsing certificate...
	I1005 21:53:11.444899 1567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem
	I1005 21:53:11.444917 1567019 main.go:141] libmachine: Decoding PEM data...
	I1005 21:53:11.444927 1567019 main.go:141] libmachine: Parsing certificate...
	I1005 21:53:11.445299 1567019 cli_runner.go:164] Run: docker network inspect force-systemd-flag-591577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1005 21:53:11.472105 1567019 cli_runner.go:211] docker network inspect force-systemd-flag-591577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1005 21:53:11.472191 1567019 network_create.go:281] running [docker network inspect force-systemd-flag-591577] to gather additional debugging logs...
	I1005 21:53:11.472207 1567019 cli_runner.go:164] Run: docker network inspect force-systemd-flag-591577
	W1005 21:53:11.498433 1567019 cli_runner.go:211] docker network inspect force-systemd-flag-591577 returned with exit code 1
	I1005 21:53:11.498464 1567019 network_create.go:284] error running [docker network inspect force-systemd-flag-591577]: docker network inspect force-systemd-flag-591577: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-591577 not found
	I1005 21:53:11.498485 1567019 network_create.go:286] output of [docker network inspect force-systemd-flag-591577]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-591577 not found
	
	** /stderr **
	I1005 21:53:11.498822 1567019 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1005 21:53:11.518818 1567019 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d16b9e9a692c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:9e:45:13} reservation:<nil>}
	I1005 21:53:11.519202 1567019 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f25a4bc44290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:89:8c:51:03} reservation:<nil>}
	I1005 21:53:11.519533 1567019 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a38500a611f6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:75:ea:d6:d9} reservation:<nil>}
	I1005 21:53:11.520001 1567019 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001213170}
	I1005 21:53:11.520026 1567019 network_create.go:124] attempt to create docker network force-systemd-flag-591577 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1005 21:53:11.520090 1567019 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-591577 force-systemd-flag-591577
	I1005 21:53:11.599587 1567019 network_create.go:108] docker network force-systemd-flag-591577 192.168.76.0/24 created
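	(A minimal sanity check of the network minikube just picked, replayable with the stock docker CLI on the same host; the --format expression is illustrative, not taken from this run:
	  docker network inspect force-systemd-flag-591577 \
	    --format '{{(index .IPAM.Config 0).Subnet}} -> {{(index .IPAM.Config 0).Gateway}}'
	  # expected: 192.168.76.0/24 -> 192.168.76.1, matching the "using free private subnet" line above)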
	I1005 21:53:11.599620 1567019 kic.go:117] calculated static IP "192.168.76.2" for the "force-systemd-flag-591577" container
	I1005 21:53:11.599691 1567019 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1005 21:53:11.617676 1567019 cli_runner.go:164] Run: docker volume create force-systemd-flag-591577 --label name.minikube.sigs.k8s.io=force-systemd-flag-591577 --label created_by.minikube.sigs.k8s.io=true
	I1005 21:53:11.636614 1567019 oci.go:103] Successfully created a docker volume force-systemd-flag-591577
	I1005 21:53:11.636717 1567019 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-591577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-591577 --entrypoint /usr/bin/test -v force-systemd-flag-591577:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1005 21:53:12.293040 1567019 oci.go:107] Successfully prepared a docker volume force-systemd-flag-591577
	I1005 21:53:12.293088 1567019 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:53:12.293108 1567019 kic.go:190] Starting extracting preloaded images to volume ...
	I1005 21:53:12.293202 1567019 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-591577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1005 21:53:14.889027 1559793 pod_ready.go:92] pod "coredns-5dd5756b68-84s28" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:14.889048 1559793 pod_ready.go:81] duration metric: took 392.808412ms waiting for pod "coredns-5dd5756b68-84s28" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:14.889061 1559793 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288189 1559793 pod_ready.go:92] pod "etcd-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.288221 1559793 pod_ready.go:81] duration metric: took 399.152165ms waiting for pod "etcd-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.288255 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689154 1559793 pod_ready.go:92] pod "kube-apiserver-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:15.689181 1559793 pod_ready.go:81] duration metric: took 400.912499ms waiting for pod "kube-apiserver-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:15.689197 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088471 1559793 pod_ready.go:92] pod "kube-controller-manager-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.088511 1559793 pod_ready.go:81] duration metric: took 399.305626ms waiting for pod "kube-controller-manager-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.088550 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488283 1559793 pod_ready.go:92] pod "kube-proxy-q7sdt" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.488310 1559793 pod_ready.go:81] duration metric: took 399.749752ms waiting for pod "kube-proxy-q7sdt" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.488322 1559793 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888834 1559793 pod_ready.go:92] pod "kube-scheduler-pause-235090" in "kube-system" namespace has status "Ready":"True"
	I1005 21:53:16.888857 1559793 pod_ready.go:81] duration metric: took 400.527006ms waiting for pod "kube-scheduler-pause-235090" in "kube-system" namespace to be "Ready" ...
	I1005 21:53:16.888866 1559793 pod_ready.go:38] duration metric: took 2.500715514s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1005 21:53:16.888884 1559793 api_server.go:52] waiting for apiserver process to appear ...
	I1005 21:53:16.888962 1559793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:53:16.907528 1559793 api_server.go:72] duration metric: took 2.723451889s to wait for apiserver process to appear ...
	I1005 21:53:16.907553 1559793 api_server.go:88] waiting for apiserver healthz status ...
	I1005 21:53:16.907570 1559793 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1005 21:53:16.918166 1559793 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
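	(The healthz probe above can be replayed by hand; a minimal sketch, assuming the profile cert paths logged in the client config earlier in this section:
	  curl --cacert /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt \
	       --cert /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt \
	       --key /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key \
	       https://192.168.67.2:8443/healthz
	  # prints "ok" on a healthy apiserver, matching the 200 above)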
	I1005 21:53:16.919632 1559793 api_server.go:141] control plane version: v1.28.2
	I1005 21:53:16.919653 1559793 api_server.go:131] duration metric: took 12.093841ms to wait for apiserver health ...
	I1005 21:53:16.919661 1559793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1005 21:53:17.090937 1559793 system_pods.go:59] 7 kube-system pods found
	I1005 21:53:17.091042 1559793 system_pods.go:61] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.091065 1559793 system_pods.go:61] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.091097 1559793 system_pods.go:61] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.091126 1559793 system_pods.go:61] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.091147 1559793 system_pods.go:61] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.091176 1559793 system_pods.go:61] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.091195 1559793 system_pods.go:61] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.091212 1559793 system_pods.go:74] duration metric: took 171.544601ms to wait for pod list to return data ...
	I1005 21:53:17.091252 1559793 default_sa.go:34] waiting for default service account to be created ...
	I1005 21:53:17.290673 1559793 default_sa.go:45] found service account: "default"
	I1005 21:53:17.290699 1559793 default_sa.go:55] duration metric: took 199.427869ms for default service account to be created ...
	I1005 21:53:17.290711 1559793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1005 21:53:17.492133 1559793 system_pods.go:86] 7 kube-system pods found
	I1005 21:53:17.492162 1559793 system_pods.go:89] "coredns-5dd5756b68-84s28" [f9362fc7-f2d0-411f-a717-fa70ffafabcb] Running
	I1005 21:53:17.492170 1559793 system_pods.go:89] "etcd-pause-235090" [5f6212e1-25e9-4349-a0b6-57713e56575c] Running
	I1005 21:53:17.492176 1559793 system_pods.go:89] "kindnet-ntfxs" [d6f70b29-95e2-4894-95d2-97463d8af989] Running
	I1005 21:53:17.492181 1559793 system_pods.go:89] "kube-apiserver-pause-235090" [d8f7fb78-a561-4232-ae85-e644e98215ac] Running
	I1005 21:53:17.492209 1559793 system_pods.go:89] "kube-controller-manager-pause-235090" [979b7651-6394-4b72-8be7-a48d6daa7cd6] Running
	I1005 21:53:17.492215 1559793 system_pods.go:89] "kube-proxy-q7sdt" [f45facf4-987f-4d09-bc27-1f5cd7879216] Running
	I1005 21:53:17.492226 1559793 system_pods.go:89] "kube-scheduler-pause-235090" [eb7cda73-019e-4978-822f-4439a1907bc1] Running
	I1005 21:53:17.492234 1559793 system_pods.go:126] duration metric: took 201.516793ms to wait for k8s-apps to be running ...
	I1005 21:53:17.492248 1559793 system_svc.go:44] waiting for kubelet service to be running ....
	I1005 21:53:17.492353 1559793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:53:17.521660 1559793 system_svc.go:56] duration metric: took 29.387989ms WaitForService to wait for kubelet.
	I1005 21:53:17.521685 1559793 kubeadm.go:581] duration metric: took 3.337617493s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1005 21:53:17.521705 1559793 node_conditions.go:102] verifying NodePressure condition ...
	I1005 21:53:17.688332 1559793 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1005 21:53:17.688360 1559793 node_conditions.go:123] node cpu capacity is 2
	I1005 21:53:17.688370 1559793 node_conditions.go:105] duration metric: took 166.659808ms to run NodePressure ...
	I1005 21:53:17.688382 1559793 start.go:228] waiting for startup goroutines ...
	I1005 21:53:17.688388 1559793 start.go:233] waiting for cluster config update ...
	I1005 21:53:17.688395 1559793 start.go:242] writing updated cluster config ...
	I1005 21:53:17.688916 1559793 ssh_runner.go:195] Run: rm -f paused
	I1005 21:53:17.828573 1559793 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1005 21:53:17.831375 1559793 out.go:177] * Done! kubectl is now configured to use "pause-235090" cluster and "default" namespace by default
	I1005 21:53:16.768521 1567019 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-591577:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.475275628s)
	I1005 21:53:16.768569 1567019 kic.go:199] duration metric: took 4.475457 seconds to extract preloaded images to volume
	W1005 21:53:16.768714 1567019 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1005 21:53:16.768838 1567019 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1005 21:53:16.840876 1567019 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-591577 --name force-systemd-flag-591577 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-591577 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-591577 --network force-systemd-flag-591577 --ip 192.168.76.2 --volume force-systemd-flag-591577:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1005 21:53:17.203532 1567019 cli_runner.go:164] Run: docker container inspect force-systemd-flag-591577 --format={{.State.Running}}
	I1005 21:53:17.230735 1567019 cli_runner.go:164] Run: docker container inspect force-systemd-flag-591577 --format={{.State.Status}}
	I1005 21:53:17.259049 1567019 cli_runner.go:164] Run: docker exec force-systemd-flag-591577 stat /var/lib/dpkg/alternatives/iptables
	I1005 21:53:17.362776 1567019 oci.go:144] the created container "force-systemd-flag-591577" has a running status.
	I1005 21:53:17.362805 1567019 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa...
	I1005 21:53:18.641661 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1005 21:53:18.641709 1567019 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1005 21:53:18.677328 1567019 cli_runner.go:164] Run: docker container inspect force-systemd-flag-591577 --format={{.State.Status}}
	I1005 21:53:18.746691 1567019 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1005 21:53:18.746719 1567019 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-591577 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1005 21:53:18.884328 1567019 cli_runner.go:164] Run: docker container inspect force-systemd-flag-591577 --format={{.State.Status}}
	I1005 21:53:18.913494 1567019 machine.go:88] provisioning docker machine ...
	I1005 21:53:18.913525 1567019 ubuntu.go:169] provisioning hostname "force-systemd-flag-591577"
	I1005 21:53:18.913606 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:18.944838 1567019 main.go:141] libmachine: Using SSH client type: native
	I1005 21:53:18.945318 1567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1005 21:53:18.945370 1567019 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-591577 && echo "force-systemd-flag-591577" | sudo tee /etc/hostname
	I1005 21:53:19.124140 1567019 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-591577
	
	I1005 21:53:19.124225 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:19.159025 1567019 main.go:141] libmachine: Using SSH client type: native
	I1005 21:53:19.159442 1567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1005 21:53:19.159468 1567019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-591577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-591577/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-591577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 21:53:19.302751 1567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 21:53:19.302781 1567019 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 21:53:19.302803 1567019 ubuntu.go:177] setting up certificates
	I1005 21:53:19.302812 1567019 provision.go:83] configureAuth start
	I1005 21:53:19.302876 1567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-591577
	I1005 21:53:19.323840 1567019 provision.go:138] copyHostCerts
	I1005 21:53:19.323888 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:53:19.323919 1567019 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 21:53:19.323930 1567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 21:53:19.324007 1567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 21:53:19.324088 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:53:19.324114 1567019 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 21:53:19.324122 1567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 21:53:19.324153 1567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 21:53:19.324202 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:53:19.324222 1567019 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 21:53:19.324231 1567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 21:53:19.324257 1567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 21:53:19.324315 1567019 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-591577 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-591577]
	I1005 21:53:19.824587 1567019 provision.go:172] copyRemoteCerts
	I1005 21:53:19.824684 1567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 21:53:19.824761 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:19.852566 1567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa Username:docker}
	I1005 21:53:19.956348 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1005 21:53:19.956411 1567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 21:53:20.005050 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1005 21:53:20.005134 1567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1005 21:53:20.049414 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1005 21:53:20.049484 1567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 21:53:20.085104 1567019 provision.go:86] duration metric: configureAuth took 782.26261ms
	I1005 21:53:20.085186 1567019 ubuntu.go:193] setting minikube options for container-runtime
	I1005 21:53:20.085457 1567019 config.go:182] Loaded profile config "force-systemd-flag-591577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:53:20.085641 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:20.111077 1567019 main.go:141] libmachine: Using SSH client type: native
	I1005 21:53:20.111694 1567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34247 <nil> <nil>}
	I1005 21:53:20.111749 1567019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 21:53:20.445660 1567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 21:53:20.445750 1567019 machine.go:91] provisioned docker machine in 1.532234468s
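	(The CRI-O drop-in written above can be verified from the host over the same SSH path minikube uses; a minimal sketch, reusing the key path, mapped port 34247, and docker user logged in this section:
	  ssh -i /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa \
	      -p 34247 docker@127.0.0.1 'cat /etc/sysconfig/crio.minikube'
	  # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ')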
	I1005 21:53:20.445782 1567019 client.go:171] LocalClient.Create took 9.001082998s
	I1005 21:53:20.445820 1567019 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-591577" took 9.001157681s
	I1005 21:53:20.445843 1567019 start.go:300] post-start starting for "force-systemd-flag-591577" (driver="docker")
	I1005 21:53:20.445890 1567019 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 21:53:20.445988 1567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 21:53:20.446067 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:20.466469 1567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa Username:docker}
	I1005 21:53:20.566421 1567019 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 21:53:20.572209 1567019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 21:53:20.572291 1567019 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 21:53:20.572319 1567019 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 21:53:20.572344 1567019 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1005 21:53:20.572375 1567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 21:53:20.572460 1567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 21:53:20.572568 1567019 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 21:53:20.572598 1567019 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> /etc/ssl/certs/14537862.pem
	I1005 21:53:20.572718 1567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 21:53:20.585916 1567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 21:53:20.626952 1567019 start.go:303] post-start completed in 181.05799ms
	I1005 21:53:20.627386 1567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-591577
	I1005 21:53:20.649308 1567019 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/force-systemd-flag-591577/config.json ...
	I1005 21:53:20.649696 1567019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:53:20.649747 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:20.675822 1567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa Username:docker}
	I1005 21:53:20.767929 1567019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 21:53:20.775050 1567019 start.go:128] duration metric: createHost completed in 9.333213363s
	I1005 21:53:20.775079 1567019 start.go:83] releasing machines lock for "force-systemd-flag-591577", held for 9.333354056s
	I1005 21:53:20.775150 1567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-591577
	I1005 21:53:20.796849 1567019 ssh_runner.go:195] Run: cat /version.json
	I1005 21:53:20.796904 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:20.797180 1567019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 21:53:20.797258 1567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-591577
	I1005 21:53:20.842033 1567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa Username:docker}
	I1005 21:53:20.853598 1567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34247 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/force-systemd-flag-591577/id_rsa Username:docker}
	I1005 21:53:21.124743 1567019 ssh_runner.go:195] Run: systemctl --version
	I1005 21:53:21.131827 1567019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	
	* 
	* ==> CRI-O <==
	* Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.454669707Z" level=info msg="Creating container: kube-system/kindnet-ntfxs/kindnet-cni" id=c65c0f3c-7ec5-49f6-ab26-6100ff57992a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.454724361Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.488301579Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3ea67d8620f5a8b6787e3045994419f734f90704ed707865c5d1f369101f62e0/merged/etc/passwd: no such file or directory"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.488768310Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3ea67d8620f5a8b6787e3045994419f734f90704ed707865c5d1f369101f62e0/merged/etc/group: no such file or directory"
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.741779994Z" level=info msg="Created container 6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6: kube-system/coredns-5dd5756b68-84s28/coredns" id=4f085f2f-ff69-46b0-877b-208cf76ec40d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.742655144Z" level=info msg="Starting container: 6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6" id=43d74176-444f-415a-9e4d-4c88e83f5202 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.767415061Z" level=info msg="Started container" PID=3772 containerID=6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6 description=kube-system/coredns-5dd5756b68-84s28/coredns id=43d74176-444f-415a-9e4d-4c88e83f5202 name=/runtime.v1.RuntimeService/StartContainer sandboxID=932929e3687b0a33fadf97d4460547523107972997c6f040e3e0cac5fef44c70
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.815764148Z" level=info msg="Created container 997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952: kube-system/kindnet-ntfxs/kindnet-cni" id=c65c0f3c-7ec5-49f6-ab26-6100ff57992a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.821937227Z" level=info msg="Starting container: 997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952" id=02d81017-e5ab-4aef-b8fc-75ec0176551d name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.826359276Z" level=info msg="Created container 9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37: kube-system/kube-proxy-q7sdt/kube-proxy" id=6de4b439-80cc-4631-a4af-80cb79cb8889 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.829872084Z" level=info msg="Starting container: 9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37" id=3e5ca454-9e9c-4898-bd77-d99fe7da3145 name=/runtime.v1.RuntimeService/StartContainer
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.857908205Z" level=info msg="Started container" PID=3789 containerID=997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952 description=kube-system/kindnet-ntfxs/kindnet-cni id=02d81017-e5ab-4aef-b8fc-75ec0176551d name=/runtime.v1.RuntimeService/StartContainer sandboxID=600beb5cc0b7b20340c07b218da8184f72bffa203f2a845790c78c374be4adab
	Oct 05 21:53:02 pause-235090 crio[2524]: time="2023-10-05 21:53:02.859604220Z" level=info msg="Started container" PID=3782 containerID=9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37 description=kube-system/kube-proxy-q7sdt/kube-proxy id=3e5ca454-9e9c-4898-bd77-d99fe7da3145 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d1b1958be9ccd2cb88fb3e1429ccbc440797405b4a96bc7c168ef3eae9e238d
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.339044666Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363306064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363349650Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.363370967Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387902585Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387946064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.387967979Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396633436Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396681543Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.396698593Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.416056811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 05 21:53:03 pause-235090 crio[2524]: time="2023-10-05 21:53:03.416092848Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	997042ab3c1a2       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   20 seconds ago       Running             kindnet-cni               3                   600beb5cc0b7b       kindnet-ntfxs
	9359faa106d29       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   20 seconds ago       Running             kube-proxy                3                   5d1b1958be9cc       kube-proxy-q7sdt
	6b1ed923e252e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   20 seconds ago       Running             coredns                   2                   932929e3687b0       coredns-5dd5756b68-84s28
	aecb8aedf6505       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   29 seconds ago       Running             kube-controller-manager   2                   83bc0a0b9c968       kube-controller-manager-pause-235090
	33cacc9697a38       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   29 seconds ago       Running             kube-apiserver            2                   51f7a2b5b4427       kube-apiserver-pause-235090
	64a528443baf7       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   29 seconds ago       Running             kube-scheduler            2                   c820472ec9b8a       kube-scheduler-pause-235090
	6353ff01ab7d3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   29 seconds ago       Running             etcd                      3                   6da149e9221cd       etcd-pause-235090
	614251a774215       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   52 seconds ago       Exited              kube-proxy                2                   5d1b1958be9cc       kube-proxy-q7sdt
	dcf8d0b0bd636       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   55 seconds ago       Exited              kindnet-cni               2                   600beb5cc0b7b       kindnet-ntfxs
	6e63f3937b88b       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   55 seconds ago       Exited              kube-controller-manager   1                   83bc0a0b9c968       kube-controller-manager-pause-235090
	6837ad1a1228a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   58 seconds ago       Exited              etcd                      2                   6da149e9221cd       etcd-pause-235090
	bf2ac0efa83bc       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   About a minute ago   Exited              kube-scheduler            1                   c820472ec9b8a       kube-scheduler-pause-235090
	b19ba02b4f418       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Exited              coredns                   1                   932929e3687b0       coredns-5dd5756b68-84s28
	bec04f1405f80       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   About a minute ago   Exited              kube-apiserver            1                   51f7a2b5b4427       kube-apiserver-pause-235090
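	(This table is CRI-level container state; it can be reproduced on the node with the standard CRI client, e.g.:
	  sudo crictl ps -a
	  # lists running and exited containers with the same IMAGE/STATE/NAME/ATTEMPT columns as above)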
	
	* 
	* ==> coredns [6b1ed923e252e66ce55a29a2512e6d956fff8ea451e92ea6e3df941d0e8594b6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56992 - 6829 "HINFO IN 55054332545358397.485839749512134514. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.037817861s
	
	* 
	* ==> coredns [b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46008 - 62895 "HINFO IN 5083142650138728424.8248962513567012342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024009831s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-235090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-235090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=300d55cee86053f5b4c7a654fc8e7b9d3c030d53
	                    minikube.k8s.io/name=pause-235090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_05T21_51_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Oct 2023 21:51:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-235090
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Oct 2023 21:53:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:51:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Oct 2023 21:53:01 +0000   Thu, 05 Oct 2023 21:52:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-235090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c059abc9ed041bb9e4add7a29440c15
	  System UUID:                644bb90f-91ba-4716-84ce-d58ea0025b06
	  Boot ID:                    619e9679-c801-4966-a4f0-8d68f85af04f
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-84s28                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     84s
	  kube-system                 etcd-pause-235090                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-ntfxs                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-pause-235090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-235090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-q7sdt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-235090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 83s                  kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  110s (x8 over 110s)  kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x8 over 110s)  kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x8 over 110s)  kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           85s                  node-controller  Node pause-235090 event: Registered Node pause-235090 in Controller
	  Normal  NodeReady                82s                  kubelet          Node pause-235090 status is now: NodeReady
	  Normal  Starting                 30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)    kubelet          Node pause-235090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)    kubelet          Node pause-235090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x8 over 30s)    kubelet          Node pause-235090 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                   node-controller  Node pause-235090 event: Registered Node pause-235090 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001109] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000053 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001037] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +0.002754] FS-Cache: Duplicate cookie detected
	[  +0.000682] FS-Cache: O-cookie c=0000004d [p=0000004a fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000005885d3f4
	[  +0.001100] FS-Cache: O-key=[8] '6fd7c90000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000054 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000915] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000009c3c0e5e
	[  +0.001020] FS-Cache: N-key=[8] '6fd7c90000000000'
	[  +2.998730] FS-Cache: Duplicate cookie detected
	[  +0.000716] FS-Cache: O-cookie c=0000004b [p=0000004a fl=226 nc=0 na=1]
	[  +0.000947] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=000000003ef1d116
	[  +0.001076] FS-Cache: O-key=[8] '6ed7c90000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000056 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=0000000003824801
	[  +0.001036] FS-Cache: N-key=[8] '6ed7c90000000000'
	[  +0.302950] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000050 [p=0000004a fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=00000000a567629d{9p.inode} n=00000000b99a9016
	[  +0.001212] FS-Cache: O-key=[8] '74d7c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000057 [p=0000004a fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=00000000a567629d{9p.inode} n=000000003c37f5ab
	[  +0.001183] FS-Cache: N-key=[8] '74d7c90000000000'
	
	* 
	* ==> etcd [6353ff01ab7d39192ebd50473fa23e2f6bb9ea623ff1c00d65c804fc465cd7c9] <==
	* {"level":"info","ts":"2023-10-05T21:52:54.816405Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:52:54.816414Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-05T21:52:54.816658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-10-05T21:52:54.816731Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-10-05T21:52:54.816814Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:52:54.816856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-05T21:52:54.848558Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-05T21:52:54.848763Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-05T21:52:54.848787Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-05T21:52:54.84889Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:54.848899Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:56.039451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:56.039538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.039562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-05T21:52:56.049663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:56.050668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-05T21:52:56.051029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:56.051849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:52:56.049624Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-235090 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:52:56.052337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:56.097487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [6837ad1a1228ae1a8b898304913488fa74bd53aa4268ba55e4aafa4086ca4d44] <==
	* {"level":"info","ts":"2023-10-05T21:52:25.507322Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:26.88386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.883903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.88393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-05T21:52:26.883944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.883961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-05T21:52:26.88491Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-235090 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-05T21:52:26.884947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:26.884986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-05T21:52:26.886224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-05T21:52:26.885099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:26.886323Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-05T21:52:26.894242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-05T21:52:34.429446Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-05T21:52:34.429505Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-235090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-05T21:52:34.429609Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.429689Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.464775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-05T21:52:34.464915Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-05T21:52:34.464981Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-05T21:52:34.467312Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:34.467522Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-05T21:52:34.467571Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-235090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:53:24 up  7:35,  0 users,  load average: 5.74, 3.23, 2.13
	Linux pause-235090 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [997042ab3c1a2b374a07872a80c9c1ba16fe06bcb8127a96efb2a6a1317a9952] <==
	* I1005 21:53:02.929239       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:53:02.929574       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1005 21:53:02.929761       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:53:02.930502       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:53:02.930574       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:53:03.332997       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:53:03.333026       1 main.go:227] handling current node
	I1005 21:53:13.353927       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:53:13.354056       1 main.go:227] handling current node
	I1005 21:53:23.374718       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:53:23.374745       1 main.go:227] handling current node
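
Both kindnet containers settle into the same two-line pattern on a roughly ten-second cadence, a periodic reconcile over the node list. A minimal sketch of that loop shape (handleNode is an illustrative stand-in, not kindnet's actual function):

	// reconcile_sketch.go: hedged sketch of the cadence visible above.
	package main

	import (
		"log"
		"time"
	)

	func handleNode() {
		// In kindnet this walks the node list and programs routes; here we
		// only mirror the log line.
		log.Println("handling current node")
	}

	func main() {
		handleNode() // initial pass right after startup
		for range time.Tick(10 * time.Second) {
			handleNode() // steady-state reconcile, matching the timestamps above
		}
	}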
	
	* 
	* ==> kindnet [dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796] <==
	* I1005 21:52:28.511412       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1005 21:52:28.511676       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1005 21:52:28.511882       1 main.go:116] setting mtu 1500 for CNI 
	I1005 21:52:28.511950       1 main.go:146] kindnetd IP family: "ipv4"
	I1005 21:52:28.511990       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1005 21:52:32.595829       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1005 21:52:32.595867       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [33cacc9697a38de5e15d23b014b59e1c6336291a819ecb274d69930802781451] <==
	* I1005 21:53:01.779916       1 naming_controller.go:291] Starting NamingConditionController
	I1005 21:53:01.779958       1 establishing_controller.go:76] Starting EstablishingController
	I1005 21:53:01.779998       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1005 21:53:01.780038       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1005 21:53:01.780078       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1005 21:53:01.903293       1 shared_informer.go:318] Caches are synced for configmaps
	I1005 21:53:01.916661       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1005 21:53:01.921730       1 aggregator.go:166] initial CRD sync complete...
	I1005 21:53:01.921823       1 autoregister_controller.go:141] Starting autoregister controller
	I1005 21:53:01.923560       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1005 21:53:01.923623       1 cache.go:39] Caches are synced for autoregister controller
	I1005 21:53:01.926590       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1005 21:53:01.964584       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1005 21:53:01.966926       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1005 21:53:02.002738       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1005 21:53:02.002877       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1005 21:53:02.005480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1005 21:53:02.011170       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1005 21:53:02.011290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1005 21:53:02.683935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1005 21:53:04.726804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1005 21:53:04.909913       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1005 21:53:04.928078       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1005 21:53:05.000660       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1005 21:53:05.022511       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [bec04f1405f80243f2f22fef4169262fa753c9c29de60665a4d8075556c302f5] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.790271       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.804009       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1005 21:52:49.826681       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [6e63f3937b88b2d59b0700bff188bde09a23b7184c2ddce7af5dae53885ee67d] <==
	* I1005 21:52:30.655735       1 serving.go:348] Generated self-signed cert in-memory
	I1005 21:52:33.554877       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1005 21:52:33.554993       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:52:33.556406       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1005 21:52:33.556613       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1005 21:52:33.557619       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1005 21:52:33.557703       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [aecb8aedf650580a9bed8ff109591ffc948db3d88232d00b3c7d4da59388e5ef] <==
	* I1005 21:53:14.696584       1 shared_informer.go:318] Caches are synced for service account
	I1005 21:53:14.696985       1 shared_informer.go:318] Caches are synced for namespace
	I1005 21:53:14.705236       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"pause-235090\" does not exist"
	I1005 21:53:14.718405       1 shared_informer.go:318] Caches are synced for resource quota
	I1005 21:53:14.727375       1 shared_informer.go:318] Caches are synced for node
	I1005 21:53:14.727557       1 range_allocator.go:174] "Sending events to api server"
	I1005 21:53:14.727592       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1005 21:53:14.727598       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1005 21:53:14.727611       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1005 21:53:14.730680       1 shared_informer.go:318] Caches are synced for GC
	I1005 21:53:14.737606       1 shared_informer.go:318] Caches are synced for attach detach
	I1005 21:53:14.741816       1 shared_informer.go:318] Caches are synced for daemon sets
	I1005 21:53:14.741918       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1005 21:53:14.754517       1 shared_informer.go:318] Caches are synced for taint
	I1005 21:53:14.754773       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1005 21:53:14.754894       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-235090"
	I1005 21:53:14.754944       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1005 21:53:14.754963       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1005 21:53:14.754978       1 taint_manager.go:211] "Sending events to api server"
	I1005 21:53:14.755620       1 event.go:307] "Event occurred" object="pause-235090" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-235090 event: Registered Node pause-235090 in Controller"
	I1005 21:53:14.792583       1 shared_informer.go:318] Caches are synced for TTL
	I1005 21:53:14.805386       1 shared_informer.go:318] Caches are synced for persistent volume
	I1005 21:53:15.122054       1 shared_informer.go:318] Caches are synced for garbage collector
	I1005 21:53:15.164412       1 shared_informer.go:318] Caches are synced for garbage collector
	I1005 21:53:15.164448       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4] <==
	* I1005 21:52:32.744013       1 server_others.go:69] "Using iptables proxy"
	I1005 21:52:33.062728       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1005 21:52:33.652176       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:52:33.658973       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:52:33.659090       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:52:33.659123       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:52:33.659265       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:52:33.659520       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:52:33.659762       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:52:33.660701       1 config.go:188] "Starting service config controller"
	I1005 21:52:33.660801       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:52:33.660862       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:52:33.660903       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:52:33.661659       1 config.go:315] "Starting node config controller"
	I1005 21:52:33.661716       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:52:33.765152       1 shared_informer.go:318] Caches are synced for node config
	I1005 21:52:33.784280       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:52:33.784308       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [9359faa106d29308802517c9900ec4ee4bc106cb43fde49c2d77cb122b334a37] <==
	* I1005 21:53:03.105053       1 server_others.go:69] "Using iptables proxy"
	I1005 21:53:03.177832       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1005 21:53:03.448541       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1005 21:53:03.475784       1 server_others.go:152] "Using iptables Proxier"
	I1005 21:53:03.475832       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1005 21:53:03.475842       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1005 21:53:03.475897       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1005 21:53:03.480064       1 server.go:846] "Version info" version="v1.28.2"
	I1005 21:53:03.480089       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:53:03.485482       1 config.go:188] "Starting service config controller"
	I1005 21:53:03.486607       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1005 21:53:03.486733       1 config.go:97] "Starting endpoint slice config controller"
	I1005 21:53:03.489172       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1005 21:53:03.509194       1 config.go:315] "Starting node config controller"
	I1005 21:53:03.509302       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1005 21:53:03.587706       1 shared_informer.go:318] Caches are synced for service config
	I1005 21:53:03.589950       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1005 21:53:03.610356       1 shared_informer.go:318] Caches are synced for node config
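
The paired "Waiting for caches to sync ..." / "Caches are synced ..." lines in this log, and in the scheduler and controller-manager logs in this report, come from client-go's shared informers: each component blocks until its initial List/Watch completes before it starts acting on objects. A minimal sketch of that pattern, assuming a kubeconfig at the default path:

	// informer_sync.go: hedged sketch of the cache-sync handshake logged above.
	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()

		stopCh := make(chan struct{})
		defer close(stopCh)
		factory.Start(stopCh)

		log.Println("waiting for caches to sync")
		// Blocks until the initial List/Watch completes, the moment the
		// components above log "Caches are synced".
		if !cache.WaitForCacheSync(stopCh, nodes.HasSynced) {
			log.Fatal("timed out waiting for caches to sync")
		}
		log.Println("caches are synced for nodes")
	}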
	
	* 
	* ==> kube-scheduler [64a528443baf725492413f2f9d751e39b79a50738c6a4cb4dea6bec7dfba46b8] <==
	* I1005 21:52:58.805312       1 serving.go:348] Generated self-signed cert in-memory
	W1005 21:53:01.790069       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1005 21:53:01.790178       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:53:01.790211       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1005 21:53:01.790252       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1005 21:53:01.893024       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1005 21:53:01.896485       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1005 21:53:01.898573       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1005 21:53:01.906421       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1005 21:53:01.906538       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1005 21:53:01.906593       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1005 21:53:01.923342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:53:01.923467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1005 21:53:02.009896       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
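
The forbidden errors above are a restart race: the scheduler's informers issue their first list before the apiserver has finished rebuilding its authorization view, and they clear once the caches sync (last line). One way to verify the permission after the fact is a SubjectAccessReview; the sketch below assumes a kubeconfig with rights to create reviews and is not part of the test suite:

	// sar_check.go: hedged sketch; asks whether system:kube-scheduler may
	// list csinodes, the exact check that fails transiently above.
	package main

	import (
		"context"
		"fmt"
		"log"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csinodes",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Once RBAC has settled this reports allowed=true for the scheduler.
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}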
	
	* 
	* ==> kube-scheduler [bf2ac0efa83bc2e366fc5e3b44c634eed008a3f022fd491313f6b1e00916b5f0] <==
	* E1005 21:52:32.478226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1005 21:52:32.478792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.478813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.478886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.478899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.478956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1005 21:52:32.478966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1005 21:52:32.481681       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1005 21:52:32.481712       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1005 21:52:32.498930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1005 21:52:32.498967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1005 21:52:32.499051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1005 21:52:32.499100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1005 21:52:32.538477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.538602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.538750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1005 21:52:32.538794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1005 21:52:32.541591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1005 21:52:32.541670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1005 21:52:32.541775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1005 21:52:32.541825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1005 21:52:32.541874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1005 21:52:32.541926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1005 21:52:33.660408       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1005 21:52:34.584653       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.132183    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-235090&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: W1005 21:52:54.186305    3523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.186377    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: W1005 21:52:54.480024    3523 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.480099    3523 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 05 21:52:54 pause-235090 kubelet[3523]: E1005 21:52:54.586308    3523 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-235090?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Oct 05 21:52:54 pause-235090 kubelet[3523]: I1005 21:52:54.694258    3523 kubelet_node_status.go:70] "Attempting to register node" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.968173    3523 kubelet_node_status.go:108] "Node was previously registered" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.968289    3523 kubelet_node_status.go:73] "Successfully registered node" node="pause-235090"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.971351    3523 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 05 21:53:01 pause-235090 kubelet[3523]: I1005 21:53:01.972277    3523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.141905    3523 apiserver.go:52] "Watching apiserver"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147190    3523 topology_manager.go:215] "Topology Admit Handler" podUID="d6f70b29-95e2-4894-95d2-97463d8af989" podNamespace="kube-system" podName="kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147332    3523 topology_manager.go:215] "Topology Admit Handler" podUID="f45facf4-987f-4d09-bc27-1f5cd7879216" podNamespace="kube-system" podName="kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.147385    3523 topology_manager.go:215] "Topology Admit Handler" podUID="f9362fc7-f2d0-411f-a717-fa70ffafabcb" podNamespace="kube-system" podName="coredns-5dd5756b68-84s28"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.171095    3523 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173425    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-cni-cfg\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173493    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f45facf4-987f-4d09-bc27-1f5cd7879216-lib-modules\") pod \"kube-proxy-q7sdt\" (UID: \"f45facf4-987f-4d09-bc27-1f5cd7879216\") " pod="kube-system/kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173531    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-xtables-lock\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173585    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f45facf4-987f-4d09-bc27-1f5cd7879216-xtables-lock\") pod \"kube-proxy-q7sdt\" (UID: \"f45facf4-987f-4d09-bc27-1f5cd7879216\") " pod="kube-system/kube-proxy-q7sdt"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.173623    3523 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6f70b29-95e2-4894-95d2-97463d8af989-lib-modules\") pod \"kindnet-ntfxs\" (UID: \"d6f70b29-95e2-4894-95d2-97463d8af989\") " pod="kube-system/kindnet-ntfxs"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.447813    3523 scope.go:117] "RemoveContainer" containerID="b19ba02b4f4183e5514250b3cf9339b6d6a7f09e9cdf6074a6fed620add18a89"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.449406    3523 scope.go:117] "RemoveContainer" containerID="dcf8d0b0bd6368c741e129574c84951be28bc7489642050fe516df2cddbde796"
	Oct 05 21:53:02 pause-235090 kubelet[3523]: I1005 21:53:02.449836    3523 scope.go:117] "RemoveContainer" containerID="614251a774215c0d87492fb1ddafbd76aa34e09a445262a521eac2db5764cea4"
	Oct 05 21:53:08 pause-235090 kubelet[3523]: I1005 21:53:08.158208    3523 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
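
Read together, the kubelet lines trace the apiserver restart from the outside: connection refused on 192.168.67.2:8443 at 21:52:54, successful node re-registration at 21:53:01, then normal pod reconciliation. A sketch of the same observation, polling /healthz until it answers; anonymous access to /healthz is permitted by the default system:public-info-viewer binding, and certificate verification is skipped here for brevity:

	// healthz_poll.go: hedged sketch, not part of the test suite.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		for i := 0; i < 30; i++ {
			resp, err := client.Get("https://192.168.67.2:8443/healthz")
			if err != nil {
				fmt.Println("apiserver not ready:", err) // the "connection refused" phase
			} else {
				status := resp.StatusCode
				resp.Body.Close()
				fmt.Println("apiserver replied:", status)
				if status == http.StatusOK {
					return // recovered, matching the 21:53:01 re-registration
				}
			}
			time.Sleep(2 * time.Second)
		}
	}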
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-235090 -n pause-235090
helpers_test.go:261: (dbg) Run:  kubectl --context pause-235090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (80.80s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (83.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.2137741775.exe start -p stopped-upgrade-760371 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.2137741775.exe start -p stopped-upgrade-760371 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m14.218027481s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.2137741775.exe -p stopped-upgrade-760371 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.2137741775.exe -p stopped-upgrade-760371 stop: (2.377482488s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-760371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-760371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.050258347s)
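
The test drives two binaries: the pinned v1.17.0 release starts and then stops the profile, and the binary under test restarts it; that restart is the step exiting with status 90 above. A sketch of the same three steps with os/exec, using the paths from the log; the real harness in version_upgrade_test.go adds retries, timeouts, and output capture:

	// upgrade_steps.go: hedged sketch of the test's command sequence.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		legacy := "/tmp/minikube-v1.17.0.2137741775.exe"
		current := "out/minikube-linux-arm64"
		profile := "stopped-upgrade-760371"

		if err := run(legacy, "start", "-p", profile, "--memory=2200",
			"--vm-driver=docker", "--container-runtime=crio"); err != nil {
			log.Fatal(err)
		}
		if err := run(legacy, "-p", profile, "stop"); err != nil {
			log.Fatal(err)
		}
		// The step that fails above; err wraps the process's exit status (90).
		if err := run(current, "start", "-p", profile, "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=docker",
			"--container-runtime=crio"); err != nil {
			log.Fatalf("upgrade start failed: %v", err)
		}
	}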

                                                
                                                
-- stdout --
	* [stopped-upgrade-760371] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-760371 in cluster stopped-upgrade-760371
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-760371" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1005 22:02:25.942192 1602578 out.go:296] Setting OutFile to fd 1 ...
	I1005 22:02:25.942326 1602578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 22:02:25.942336 1602578 out.go:309] Setting ErrFile to fd 2...
	I1005 22:02:25.942342 1602578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 22:02:25.942644 1602578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 22:02:25.943033 1602578 out.go:303] Setting JSON to false
	I1005 22:02:25.944550 1602578 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27893,"bootTime":1696515453,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 22:02:25.944628 1602578 start.go:138] virtualization:  
	I1005 22:02:25.946941 1602578 out.go:177] * [stopped-upgrade-760371] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 22:02:25.948922 1602578 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 22:02:25.950583 1602578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 22:02:25.949109 1602578 notify.go:220] Checking for updates...
	I1005 22:02:25.954241 1602578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 22:02:25.955977 1602578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 22:02:25.959032 1602578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 22:02:25.961025 1602578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 22:02:25.963296 1602578 config.go:182] Loaded profile config "stopped-upgrade-760371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 22:02:25.965575 1602578 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1005 22:02:25.967524 1602578 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 22:02:25.992708 1602578 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 22:02:25.992918 1602578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 22:02:26.084889 1602578 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 22:02:26.074062729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 22:02:26.084996 1602578 docker.go:294] overlay module found
	I1005 22:02:26.087159 1602578 out.go:177] * Using the docker driver based on existing profile
	I1005 22:02:26.089000 1602578 start.go:298] selected driver: docker
	I1005 22:02:26.089022 1602578 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-760371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.52 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 22:02:26.089128 1602578 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 22:02:26.089862 1602578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 22:02:26.197231 1602578 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 22:02:26.186509639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 22:02:26.197626 1602578 cni.go:84] Creating CNI manager for ""
	I1005 22:02:26.197645 1602578 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 22:02:26.197657 1602578 start_flags.go:321] config:
	{Name:stopped-upgrade-760371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-760371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.52 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1005 22:02:26.200357 1602578 out.go:177] * Starting control plane node stopped-upgrade-760371 in cluster stopped-upgrade-760371
	I1005 22:02:26.202315 1602578 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 22:02:26.205849 1602578 out.go:177] * Pulling base image ...
	I1005 22:02:26.208027 1602578 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1005 22:02:26.208187 1602578 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1005 22:02:26.238502 1602578 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1005 22:02:26.238525 1602578 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1005 22:02:26.280597 1602578 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1005 22:02:26.280748 1602578 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/stopped-upgrade-760371/config.json ...
	I1005 22:02:26.280998 1602578 cache.go:195] Successfully downloaded all kic artifacts
	I1005 22:02:26.281021 1602578 start.go:365] acquiring machines lock for stopped-upgrade-760371: {Name:mk21d5d977ef0efd38627f5b91bf1644d125fe80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281071 1602578 start.go:369] acquired machines lock for "stopped-upgrade-760371" in 31.409µs
	I1005 22:02:26.281089 1602578 start.go:96] Skipping create...Using existing machine configuration
	I1005 22:02:26.281094 1602578 fix.go:54] fixHost starting: 
	I1005 22:02:26.281414 1602578 cli_runner.go:164] Run: docker container inspect stopped-upgrade-760371 --format={{.State.Status}}
	I1005 22:02:26.281673 1602578 cache.go:107] acquiring lock: {Name:mk0fa157403c63492b15d5a0a2c52e3e839b3715 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281730 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1005 22:02:26.281738 1602578 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.574µs
	I1005 22:02:26.281765 1602578 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1005 22:02:26.281772 1602578 cache.go:107] acquiring lock: {Name:mkc964c082ca26bad021be16f7f923ee9f32a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281804 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1005 22:02:26.281809 1602578 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 37.94µs
	I1005 22:02:26.281816 1602578 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1005 22:02:26.281822 1602578 cache.go:107] acquiring lock: {Name:mk99ca885724680dd8693e2447f1b981a4c49dc9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281849 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1005 22:02:26.281853 1602578 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.098µs
	I1005 22:02:26.281860 1602578 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1005 22:02:26.281866 1602578 cache.go:107] acquiring lock: {Name:mk28b497a6e64cfaf2b6ba1eb8f742cd400e4cb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281893 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1005 22:02:26.281898 1602578 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.738µs
	I1005 22:02:26.281909 1602578 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1005 22:02:26.281916 1602578 cache.go:107] acquiring lock: {Name:mkb5763da8bac8f2e59684959ba9a85485218251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.281943 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1005 22:02:26.281949 1602578 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.296µs
	I1005 22:02:26.281956 1602578 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1005 22:02:26.281965 1602578 cache.go:107] acquiring lock: {Name:mkfc6f61869687d6e82a0036c1cd3dc3327f61cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.282024 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1005 22:02:26.282030 1602578 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 66.051µs
	I1005 22:02:26.282036 1602578 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1005 22:02:26.282046 1602578 cache.go:107] acquiring lock: {Name:mkbc7108f01f8e966d83756b2e5d6cef66841b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.282072 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1005 22:02:26.282077 1602578 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 31.959µs
	I1005 22:02:26.282083 1602578 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1005 22:02:26.282091 1602578 cache.go:107] acquiring lock: {Name:mk1d6f6052102b5b4c1a02f29e9d7ee38e5131c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1005 22:02:26.282121 1602578 cache.go:115] /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1005 22:02:26.282126 1602578 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 35.553µs
	I1005 22:02:26.282132 1602578 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1005 22:02:26.282138 1602578 cache.go:87] Successfully saved all images to host disk.
	I1005 22:02:26.300912 1602578 fix.go:102] recreateIfNeeded on stopped-upgrade-760371: state=Stopped err=<nil>
	W1005 22:02:26.300958 1602578 fix.go:128] unexpected machine state, will restart: <nil>
	I1005 22:02:26.305109 1602578 out.go:177] * Restarting existing docker container for "stopped-upgrade-760371" ...
	I1005 22:02:26.307248 1602578 cli_runner.go:164] Run: docker start stopped-upgrade-760371
	I1005 22:02:26.669698 1602578 cli_runner.go:164] Run: docker container inspect stopped-upgrade-760371 --format={{.State.Status}}
	I1005 22:02:26.716516 1602578 kic.go:426] container "stopped-upgrade-760371" state is running.
	I1005 22:02:26.716950 1602578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-760371
	I1005 22:02:26.747788 1602578 profile.go:148] Saving config to /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/stopped-upgrade-760371/config.json ...
	I1005 22:02:26.748015 1602578 machine.go:88] provisioning docker machine ...
	I1005 22:02:26.748054 1602578 ubuntu.go:169] provisioning hostname "stopped-upgrade-760371"
	I1005 22:02:26.748111 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:26.777676 1602578 main.go:141] libmachine: Using SSH client type: native
	I1005 22:02:26.778241 1602578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34293 <nil> <nil>}
	I1005 22:02:26.778263 1602578 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-760371 && echo "stopped-upgrade-760371" | sudo tee /etc/hostname
	I1005 22:02:26.778915 1602578 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1005 22:02:29.991651 1602578 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-760371
	
	I1005 22:02:29.991809 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:30.031057 1602578 main.go:141] libmachine: Using SSH client type: native
	I1005 22:02:30.031504 1602578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34293 <nil> <nil>}
	I1005 22:02:30.031530 1602578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-760371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-760371/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-760371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1005 22:02:30.216069 1602578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1005 22:02:30.216100 1602578 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17363-1448442/.minikube CaCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17363-1448442/.minikube}
	I1005 22:02:30.216148 1602578 ubuntu.go:177] setting up certificates
	I1005 22:02:30.216159 1602578 provision.go:83] configureAuth start
	I1005 22:02:30.216242 1602578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-760371
	I1005 22:02:30.254753 1602578 provision.go:138] copyHostCerts
	I1005 22:02:30.254831 1602578 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem, removing ...
	I1005 22:02:30.254841 1602578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem
	I1005 22:02:30.254929 1602578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.pem (1082 bytes)
	I1005 22:02:30.255067 1602578 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem, removing ...
	I1005 22:02:30.255072 1602578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem
	I1005 22:02:30.255106 1602578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/cert.pem (1123 bytes)
	I1005 22:02:30.255187 1602578 exec_runner.go:144] found /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem, removing ...
	I1005 22:02:30.255192 1602578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem
	I1005 22:02:30.255216 1602578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17363-1448442/.minikube/key.pem (1675 bytes)
	I1005 22:02:30.255273 1602578 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-760371 san=[192.168.70.52 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-760371]
	I1005 22:02:30.935118 1602578 provision.go:172] copyRemoteCerts
	I1005 22:02:30.935204 1602578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1005 22:02:30.935267 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:30.966049 1602578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/stopped-upgrade-760371/id_rsa Username:docker}
	I1005 22:02:31.075861 1602578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1005 22:02:31.108980 1602578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1005 22:02:31.136667 1602578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1005 22:02:31.177534 1602578 provision.go:86] duration metric: configureAuth took 961.357634ms
	I1005 22:02:31.177621 1602578 ubuntu.go:193] setting minikube options for container-runtime
	I1005 22:02:31.177859 1602578 config.go:182] Loaded profile config "stopped-upgrade-760371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1005 22:02:31.178040 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:31.202860 1602578 main.go:141] libmachine: Using SSH client type: native
	I1005 22:02:31.203338 1602578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34293 <nil> <nil>}
	I1005 22:02:31.203355 1602578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1005 22:02:31.671535 1602578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1005 22:02:31.671561 1602578 machine.go:91] provisioned docker machine in 4.923530176s
	I1005 22:02:31.671572 1602578 start.go:300] post-start starting for "stopped-upgrade-760371" (driver="docker")
	I1005 22:02:31.671583 1602578 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1005 22:02:31.671652 1602578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1005 22:02:31.671695 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:31.691755 1602578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/stopped-upgrade-760371/id_rsa Username:docker}
	I1005 22:02:31.796411 1602578 ssh_runner.go:195] Run: cat /etc/os-release
	I1005 22:02:31.800698 1602578 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1005 22:02:31.800726 1602578 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1005 22:02:31.800737 1602578 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1005 22:02:31.800745 1602578 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1005 22:02:31.800755 1602578 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/addons for local assets ...
	I1005 22:02:31.800816 1602578 filesync.go:126] Scanning /home/jenkins/minikube-integration/17363-1448442/.minikube/files for local assets ...
	I1005 22:02:31.800904 1602578 filesync.go:149] local asset: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem -> 14537862.pem in /etc/ssl/certs
	I1005 22:02:31.801024 1602578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1005 22:02:31.810287 1602578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/ssl/certs/14537862.pem --> /etc/ssl/certs/14537862.pem (1708 bytes)
	I1005 22:02:31.833894 1602578 start.go:303] post-start completed in 162.304893ms
	I1005 22:02:31.833996 1602578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 22:02:31.834051 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:31.858787 1602578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/stopped-upgrade-760371/id_rsa Username:docker}
	I1005 22:02:31.957949 1602578 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1005 22:02:31.963813 1602578 fix.go:56] fixHost completed within 5.682704511s
	I1005 22:02:31.963838 1602578 start.go:83] releasing machines lock for "stopped-upgrade-760371", held for 5.682758245s
	I1005 22:02:31.963909 1602578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-760371
	I1005 22:02:31.983549 1602578 ssh_runner.go:195] Run: cat /version.json
	I1005 22:02:31.983605 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:31.983831 1602578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1005 22:02:31.983906 1602578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-760371
	I1005 22:02:32.006288 1602578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/stopped-upgrade-760371/id_rsa Username:docker}
	I1005 22:02:32.013475 1602578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34293 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/stopped-upgrade-760371/id_rsa Username:docker}
	W1005 22:02:32.247484 1602578 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1005 22:02:32.247582 1602578 ssh_runner.go:195] Run: systemctl --version
	I1005 22:02:32.254296 1602578 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1005 22:02:32.362844 1602578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1005 22:02:32.368884 1602578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 22:02:32.394341 1602578 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1005 22:02:32.394418 1602578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1005 22:02:32.427900 1602578 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1005 22:02:32.427924 1602578 start.go:469] detecting cgroup driver to use...
	I1005 22:02:32.427956 1602578 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1005 22:02:32.428006 1602578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1005 22:02:32.458547 1602578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1005 22:02:32.472274 1602578 docker.go:197] disabling cri-docker service (if available) ...
	I1005 22:02:32.472351 1602578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1005 22:02:32.485658 1602578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1005 22:02:32.497664 1602578 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1005 22:02:32.510559 1602578 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1005 22:02:32.510641 1602578 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1005 22:02:32.620629 1602578 docker.go:213] disabling docker service ...
	I1005 22:02:32.620735 1602578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1005 22:02:32.634534 1602578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1005 22:02:32.647732 1602578 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1005 22:02:32.758811 1602578 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1005 22:02:32.877882 1602578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1005 22:02:32.891223 1602578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1005 22:02:32.910101 1602578 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1005 22:02:32.910212 1602578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1005 22:02:32.927642 1602578 out.go:177] 
	W1005 22:02:32.931298 1602578 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1005 22:02:32.931326 1602578 out.go:239] * 
	* 
	W1005 22:02:32.932284 1602578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1005 22:02:32.935011 1602578 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-760371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (83.65s)
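
Note: the root cause is visible in the captured stderr above. After restoring the v1.17.0-era container, minikube rewrites pause_image with sed, but /etc/crio/crio.conf.d/02-crio.conf does not exist on that older image, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. Below is a minimal defensive sketch of that same step in the shell the test already drives over SSH; only the drop-in path and pause image tag are taken from the log, while the /etc/crio/crio.conf fallback and the [crio.image] section reflect standard CRI-O configuration layout, not anything this log confirms about the image:

	PAUSE_IMG='registry.k8s.io/pause:3.2'
	# Edit whichever CRI-O config actually exists instead of assuming the drop-in.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  if sudo test -f "$f"; then
	    sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$PAUSE_IMG\"|" "$f"
	    exit 0
	  fi
	done
	# No config found: create the drop-in so CRI-O picks the value up on restart.
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.image]\npause_image = "%s"\n' "$PAUSE_IMG" \
	  | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null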

                                                
                                    

Test pass (265/301)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.81
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.22
10 TestDownloadOnly/v1.28.2/json-events 11.9
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
25 TestAddons/Setup 159.94
27 TestAddons/parallel/Registry 15.81
29 TestAddons/parallel/InspektorGadget 10.95
30 TestAddons/parallel/MetricsServer 5.9
33 TestAddons/parallel/CSI 62.35
34 TestAddons/parallel/Headlamp 13.82
35 TestAddons/parallel/CloudSpanner 6.13
36 TestAddons/parallel/LocalPath 10.69
39 TestAddons/serial/GCPAuth/Namespaces 0.17
40 TestAddons/StoppedEnableDisable 12.39
41 TestCertOptions 37.41
42 TestCertExpiration 249.08
44 TestForceSystemdFlag 43.57
45 TestForceSystemdEnv 38.91
51 TestErrorSpam/setup 30.57
52 TestErrorSpam/start 0.86
53 TestErrorSpam/status 1.08
54 TestErrorSpam/pause 1.91
55 TestErrorSpam/unpause 2.14
56 TestErrorSpam/stop 1.5
59 TestFunctional/serial/CopySyncFile 0
60 TestFunctional/serial/StartWithProxy 78.92
61 TestFunctional/serial/AuditLog 0
62 TestFunctional/serial/SoftStart 43.23
63 TestFunctional/serial/KubeContext 0.06
64 TestFunctional/serial/KubectlGetPods 0.11
67 TestFunctional/serial/CacheCmd/cache/add_remote 4.1
68 TestFunctional/serial/CacheCmd/cache/add_local 1.09
69 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
70 TestFunctional/serial/CacheCmd/cache/list 0.06
71 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
72 TestFunctional/serial/CacheCmd/cache/cache_reload 2.45
73 TestFunctional/serial/CacheCmd/cache/delete 0.12
74 TestFunctional/serial/MinikubeKubectlCmd 0.15
75 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
76 TestFunctional/serial/ExtraConfig 34.15
77 TestFunctional/serial/ComponentHealth 0.11
78 TestFunctional/serial/LogsCmd 1.94
79 TestFunctional/serial/LogsFileCmd 1.95
80 TestFunctional/serial/InvalidService 4.82
82 TestFunctional/parallel/ConfigCmd 0.52
83 TestFunctional/parallel/DashboardCmd 13.21
84 TestFunctional/parallel/DryRun 0.64
85 TestFunctional/parallel/InternationalLanguage 0.3
86 TestFunctional/parallel/StatusCmd 1.12
90 TestFunctional/parallel/ServiceCmdConnect 7.73
91 TestFunctional/parallel/AddonsCmd 0.14
92 TestFunctional/parallel/PersistentVolumeClaim 25.54
94 TestFunctional/parallel/SSHCmd 0.87
95 TestFunctional/parallel/CpCmd 1.55
97 TestFunctional/parallel/FileSync 0.46
98 TestFunctional/parallel/CertSync 2.04
102 TestFunctional/parallel/NodeLabels 0.09
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
106 TestFunctional/parallel/License 0.44
107 TestFunctional/parallel/Version/short 0.09
108 TestFunctional/parallel/Version/components 1.08
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
113 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
114 TestFunctional/parallel/ImageCommands/Setup 2.72
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.83
119 TestFunctional/parallel/ServiceCmd/DeployApp 11.43
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.08
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.89
122 TestFunctional/parallel/ServiceCmd/List 0.46
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
125 TestFunctional/parallel/ServiceCmd/Format 0.49
126 TestFunctional/parallel/ServiceCmd/URL 0.47
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.8
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.61
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.04
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.45
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
143 TestFunctional/parallel/ProfileCmd/profile_list 0.44
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
145 TestFunctional/parallel/MountCmd/any-port 8.11
146 TestFunctional/parallel/MountCmd/specific-port 1.78
147 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
148 TestFunctional/delete_addon-resizer_images 0.09
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
154 TestIngressAddonLegacy/StartLegacyK8sCluster 92.45
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.01
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
161 TestJSONOutput/start/Command 77.46
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.87
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.75
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 5.9
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.24
186 TestKicCustomNetwork/create_custom_network 44.68
187 TestKicCustomNetwork/use_default_bridge_network 33.79
188 TestKicExistingNetwork 35.62
189 TestKicCustomSubnet 35.88
190 TestKicStaticIP 34.1
191 TestMainNoArgs 0.05
192 TestMinikubeProfile 67.57
195 TestMountStart/serial/StartWithMountFirst 7.02
196 TestMountStart/serial/VerifyMountFirst 0.28
197 TestMountStart/serial/StartWithMountSecond 7.31
198 TestMountStart/serial/VerifyMountSecond 0.28
199 TestMountStart/serial/DeleteFirst 1.69
200 TestMountStart/serial/VerifyMountPostDelete 0.29
201 TestMountStart/serial/Stop 1.22
202 TestMountStart/serial/RestartStopped 8.43
203 TestMountStart/serial/VerifyMountPostStop 0.28
206 TestMultiNode/serial/FreshStart2Nodes 127.08
207 TestMultiNode/serial/DeployApp2Nodes 5.74
209 TestMultiNode/serial/AddNode 47.81
210 TestMultiNode/serial/ProfileList 0.36
211 TestMultiNode/serial/CopyFile 11.11
212 TestMultiNode/serial/StopNode 2.37
213 TestMultiNode/serial/StartAfterStop 12.47
214 TestMultiNode/serial/RestartKeepsNodes 123.67
215 TestMultiNode/serial/DeleteNode 5.15
216 TestMultiNode/serial/StopMultiNode 24.08
217 TestMultiNode/serial/RestartMultiNode 84.9
218 TestMultiNode/serial/ValidateNameConflict 36.26
223 TestPreload 177.34
225 TestScheduledStopUnix 110.03
228 TestInsufficientStorage 13.45
231 TestKubernetesUpgrade 425.52
234 TestPause/serial/Start 57.71
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
237 TestNoKubernetes/serial/StartWithK8s 42.52
238 TestNoKubernetes/serial/StartWithStopK8s 6.84
239 TestNoKubernetes/serial/Start 9.79
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
242 TestNoKubernetes/serial/ProfileList 1.15
243 TestNoKubernetes/serial/Stop 1.24
244 TestNoKubernetes/serial/StartNoArgs 7.72
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
253 TestNetworkPlugins/group/false 4.08
257 TestStoppedBinaryUpgrade/Setup 2.14
266 TestNetworkPlugins/group/auto/Start 86.76
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
268 TestNetworkPlugins/group/kindnet/Start 56.86
269 TestNetworkPlugins/group/auto/KubeletFlags 0.38
270 TestNetworkPlugins/group/auto/NetCatPod 12.5
271 TestNetworkPlugins/group/auto/DNS 0.29
272 TestNetworkPlugins/group/auto/Localhost 0.22
273 TestNetworkPlugins/group/auto/HairPin 0.22
274 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
275 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
276 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
277 TestNetworkPlugins/group/calico/Start 74.29
278 TestNetworkPlugins/group/kindnet/DNS 0.32
279 TestNetworkPlugins/group/kindnet/Localhost 0.22
280 TestNetworkPlugins/group/kindnet/HairPin 0.24
281 TestNetworkPlugins/group/custom-flannel/Start 75.26
282 TestNetworkPlugins/group/calico/ControllerPod 5.07
283 TestNetworkPlugins/group/calico/KubeletFlags 0.35
284 TestNetworkPlugins/group/calico/NetCatPod 12.76
285 TestNetworkPlugins/group/calico/DNS 0.31
286 TestNetworkPlugins/group/calico/Localhost 0.43
287 TestNetworkPlugins/group/calico/HairPin 0.3
288 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
289 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.66
290 TestNetworkPlugins/group/enable-default-cni/Start 94.24
291 TestNetworkPlugins/group/custom-flannel/DNS 0.32
292 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
293 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
294 TestNetworkPlugins/group/flannel/Start 66.89
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
297 TestNetworkPlugins/group/flannel/ControllerPod 5.04
298 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
299 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
300 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
302 TestNetworkPlugins/group/flannel/NetCatPod 10.34
303 TestNetworkPlugins/group/flannel/DNS 0.31
304 TestNetworkPlugins/group/flannel/Localhost 0.28
305 TestNetworkPlugins/group/flannel/HairPin 0.31
306 TestNetworkPlugins/group/bridge/Start 88.4
308 TestStartStop/group/old-k8s-version/serial/FirstStart 136.17
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
310 TestNetworkPlugins/group/bridge/NetCatPod 11.34
311 TestNetworkPlugins/group/bridge/DNS 0.2
312 TestNetworkPlugins/group/bridge/Localhost 0.19
313 TestNetworkPlugins/group/bridge/HairPin 0.18
315 TestStartStop/group/no-preload/serial/FirstStart 67.48
316 TestStartStop/group/old-k8s-version/serial/DeployApp 9.72
317 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.59
318 TestStartStop/group/old-k8s-version/serial/Stop 12.47
319 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
320 TestStartStop/group/old-k8s-version/serial/SecondStart 449.28
321 TestStartStop/group/no-preload/serial/DeployApp 8.54
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
323 TestStartStop/group/no-preload/serial/Stop 12.3
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
325 TestStartStop/group/no-preload/serial/SecondStart 636.59
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
328 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
329 TestStartStop/group/old-k8s-version/serial/Pause 3.76
331 TestStartStop/group/embed-certs/serial/FirstStart 49.38
332 TestStartStop/group/embed-certs/serial/DeployApp 9.5
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
334 TestStartStop/group/embed-certs/serial/Stop 12.14
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
336 TestStartStop/group/embed-certs/serial/SecondStart 343.88
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
340 TestStartStop/group/no-preload/serial/Pause 3.57
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.95
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.51
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
347 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 347.23
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 18.03
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
350 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
351 TestStartStop/group/embed-certs/serial/Pause 3.57
353 TestStartStop/group/newest-cni/serial/FirstStart 43.99
354 TestStartStop/group/newest-cni/serial/DeployApp 0
355 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
356 TestStartStop/group/newest-cni/serial/Stop 1.32
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/newest-cni/serial/SecondStart 31.56
359 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
362 TestStartStop/group/newest-cni/serial/Pause 3.29
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.04
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.36
TestDownloadOnly/v1.16.0/json-events (11.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-762455 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-762455 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.804840585s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.81s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
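
Note: this check only asserts that the preload tarball fetched by the json-events run above landed in the cache. A quick manual equivalent, using the MINIKUBE_HOME and tarball name that appear verbatim in the download log further below (nothing here is guessed beyond combining those two paths):

	ls -lh /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4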

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-762455
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-762455: exit status 85 (221.279686ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-762455 | jenkins | v1.31.2 | 05 Oct 23 21:14 UTC |          |
	|         | -p download-only-762455        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:14:48
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:14:48.837791 1453791 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:14:48.838018 1453791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:14:48.838030 1453791 out.go:309] Setting ErrFile to fd 2...
	I1005 21:14:48.838036 1453791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:14:48.838352 1453791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	W1005 21:14:48.838529 1453791 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-1448442/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-1448442/.minikube/config/config.json: no such file or directory
	I1005 21:14:48.838942 1453791 out.go:303] Setting JSON to true
	I1005 21:14:48.839929 1453791 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25036,"bootTime":1696515453,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:14:48.840010 1453791 start.go:138] virtualization:  
	I1005 21:14:48.843127 1453791 out.go:97] [download-only-762455] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:14:48.845218 1453791 out.go:169] MINIKUBE_LOCATION=17363
	W1005 21:14:48.843413 1453791 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball: no such file or directory
	I1005 21:14:48.843492 1453791 notify.go:220] Checking for updates...
	I1005 21:14:48.848840 1453791 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:14:48.850470 1453791 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:14:48.852033 1453791 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:14:48.853902 1453791 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1005 21:14:48.857115 1453791 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 21:14:48.857439 1453791 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:14:48.882536 1453791 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:14:48.882619 1453791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:14:48.978096 1453791 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-05 21:14:48.968088621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:14:48.978205 1453791 docker.go:294] overlay module found
	I1005 21:14:48.980267 1453791 out.go:97] Using the docker driver based on user configuration
	I1005 21:14:48.980296 1453791 start.go:298] selected driver: docker
	I1005 21:14:48.980303 1453791 start.go:902] validating driver "docker" against <nil>
	I1005 21:14:48.980413 1453791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:14:49.047526 1453791 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-05 21:14:49.038329233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:14:49.047696 1453791 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1005 21:14:49.047967 1453791 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1005 21:14:49.048120 1453791 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1005 21:14:49.049888 1453791 out.go:169] Using Docker driver with root privileges
	I1005 21:14:49.051585 1453791 cni.go:84] Creating CNI manager for ""
	I1005 21:14:49.051602 1453791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:14:49.051612 1453791 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1005 21:14:49.051631 1453791 start_flags.go:321] config:
	{Name:download-only-762455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-762455 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:14:49.053508 1453791 out.go:97] Starting control plane node download-only-762455 in cluster download-only-762455
	I1005 21:14:49.053528 1453791 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:14:49.055130 1453791 out.go:97] Pulling base image ...
	I1005 21:14:49.055151 1453791 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1005 21:14:49.055285 1453791 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:14:49.072369 1453791 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:14:49.072550 1453791 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:14:49.072652 1453791 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:14:49.133695 1453791 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1005 21:14:49.133727 1453791 cache.go:57] Caching tarball of preloaded images
	I1005 21:14:49.134300 1453791 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1005 21:14:49.136702 1453791 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1005 21:14:49.136728 1453791 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:14:49.246741 1453791 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1005 21:14:53.934866 1453791 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:14:58.731609 1453791 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:14:58.731715 1453791 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-762455"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.22s)
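
Note: this subtest passes despite the non-zero exit. The profile was created with --download-only, so no control-plane node exists and "minikube logs" has nothing to collect, exactly as the captured output states; the test evidently treats exit status 85 as the expected outcome here. A reproduction sketch against the same profile name from the log (a hypothetical local run, not part of the test):

	out/minikube-linux-arm64 logs -p download-only-762455
	echo $?    # 85 in this run: no control-plane node to gather logs from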

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (11.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-762455 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-762455 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.90180224s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (11.90s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-762455
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-762455: exit status 85 (74.055339ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-762455 | jenkins | v1.31.2 | 05 Oct 23 21:14 UTC |          |
	|         | -p download-only-762455        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-762455 | jenkins | v1.31.2 | 05 Oct 23 21:15 UTC |          |
	|         | -p download-only-762455        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/05 21:15:00
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1005 21:15:00.870333 1453864 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:15:00.870563 1453864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:15:00.870574 1453864 out.go:309] Setting ErrFile to fd 2...
	I1005 21:15:00.870580 1453864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:15:00.870916 1453864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	W1005 21:15:00.871065 1453864 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17363-1448442/.minikube/config/config.json: open /home/jenkins/minikube-integration/17363-1448442/.minikube/config/config.json: no such file or directory
	I1005 21:15:00.871312 1453864 out.go:303] Setting JSON to true
	I1005 21:15:00.872372 1453864 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25048,"bootTime":1696515453,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:15:00.872457 1453864 start.go:138] virtualization:  
	I1005 21:15:00.911728 1453864 out.go:97] [download-only-762455] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:15:00.943986 1453864 out.go:169] MINIKUBE_LOCATION=17363
	I1005 21:15:00.912086 1453864 notify.go:220] Checking for updates...
	I1005 21:15:01.024340 1453864 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:15:01.064163 1453864 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:15:01.088602 1453864 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:15:01.120663 1453864 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1005 21:15:01.185013 1453864 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1005 21:15:01.185635 1453864 config.go:182] Loaded profile config "download-only-762455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1005 21:15:01.185692 1453864 start.go:810] api.Load failed for download-only-762455: filestore "download-only-762455": Docker machine "download-only-762455" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 21:15:01.185818 1453864 driver.go:378] Setting default libvirt URI to qemu:///system
	W1005 21:15:01.185853 1453864 start.go:810] api.Load failed for download-only-762455: filestore "download-only-762455": Docker machine "download-only-762455" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1005 21:15:01.214708 1453864 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:15:01.214800 1453864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:15:01.283371 1453864 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:15:01.273094842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:15:01.283483 1453864 docker.go:294] overlay module found
	I1005 21:15:01.313571 1453864 out.go:97] Using the docker driver based on existing profile
	I1005 21:15:01.313624 1453864 start.go:298] selected driver: docker
	I1005 21:15:01.313631 1453864 start.go:902] validating driver "docker" against &{Name:download-only-762455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-762455 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:15:01.313833 1453864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:15:01.384000 1453864 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-05 21:15:01.372395565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:15:01.384478 1453864 cni.go:84] Creating CNI manager for ""
	I1005 21:15:01.384491 1453864 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1005 21:15:01.384503 1453864 start_flags.go:321] config:
	{Name:download-only-762455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-762455 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:15:01.408354 1453864 out.go:97] Starting control plane node download-only-762455 in cluster download-only-762455
	I1005 21:15:01.408391 1453864 cache.go:122] Beginning downloading kic base image for docker with crio
	I1005 21:15:01.440683 1453864 out.go:97] Pulling base image ...
	I1005 21:15:01.440721 1453864 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:15:01.440793 1453864 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1005 21:15:01.459977 1453864 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1005 21:15:01.460092 1453864 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1005 21:15:01.460121 1453864 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1005 21:15:01.460126 1453864 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1005 21:15:01.460134 1453864 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1005 21:15:01.554167 1453864 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1005 21:15:01.554195 1453864 cache.go:57] Caching tarball of preloaded images
	I1005 21:15:01.557367 1453864 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1005 21:15:01.568432 1453864 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1005 21:15:01.568479 1453864 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1005 21:15:01.685103 1453864 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17363-1448442/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-762455"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-762455
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-658961 --alsologtostderr --binary-mirror http://127.0.0.1:40319 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-658961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-658961
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:926: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-792068
addons_test.go:926: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-792068: exit status 85 (60.20758ms)

-- stdout --
	* Profile "addons-792068" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-792068"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:937: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-792068
addons_test.go:937: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-792068: exit status 85 (90.261335ms)

-- stdout --
	* Profile "addons-792068" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-792068"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (159.94s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-792068 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-792068 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m39.94379974s)
--- PASS: TestAddons/Setup (159.94s)

TestAddons/parallel/Registry (15.81s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 60.787098ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vpzch" [36e6cf6c-82e4-440d-b58b-99058332f62a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015558217s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vz2q7" [ab807e06-35f3-4c48-9558-132b23eea60e] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01593098s
addons_test.go:338: (dbg) Run:  kubectl --context addons-792068 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-792068 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-792068 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.527142232s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 ip
2023/10/05 21:18:09 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.81s)

TestAddons/parallel/InspektorGadget (10.95s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-s6hwf" [c040d10e-def3-4753-9e44-1ac6c3517028] Running
addons_test.go:836: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012010787s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-792068
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-792068: (5.936418913s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

TestAddons/parallel/MetricsServer (5.9s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 9.214009ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xlt65" [f7b11110-7006-4910-b415-20dd0a6b4c4e] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022343429s
addons_test.go:413: (dbg) Run:  kubectl --context addons-792068 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

TestAddons/parallel/CSI (62.35s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:559: csi-hostpath-driver pods stabilized in 13.187928ms
addons_test.go:562: (dbg) Run:  kubectl --context addons-792068 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-792068 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7fe3fb71-aeaf-4f95-b937-a8d9e4231ece] Pending
helpers_test.go:344: "task-pv-pod" [7fe3fb71-aeaf-4f95-b937-a8d9e4231ece] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7fe3fb71-aeaf-4f95-b937-a8d9e4231ece] Running
addons_test.go:577: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.017641842s
addons_test.go:582: (dbg) Run:  kubectl --context addons-792068 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-792068 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-792068 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-792068 delete pod task-pv-pod
addons_test.go:592: (dbg) Done: kubectl --context addons-792068 delete pod task-pv-pod: (1.38507547s)
addons_test.go:598: (dbg) Run:  kubectl --context addons-792068 delete pvc hpvc
addons_test.go:604: (dbg) Run:  kubectl --context addons-792068 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:614: (dbg) Run:  kubectl --context addons-792068 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:619: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6524fc4d-6b64-4df4-8789-3145ea854049] Pending
helpers_test.go:344: "task-pv-pod-restore" [6524fc4d-6b64-4df4-8789-3145ea854049] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6524fc4d-6b64-4df4-8789-3145ea854049] Running
addons_test.go:619: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.032429599s
addons_test.go:624: (dbg) Run:  kubectl --context addons-792068 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Run:  kubectl --context addons-792068 delete pvc hpvc-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-792068 delete volumesnapshot new-snapshot-demo
addons_test.go:636: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:636: (dbg) Done: out/minikube-linux-arm64 -p addons-792068 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86049552s)
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.35s)

TestAddons/parallel/Headlamp (13.82s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:822: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-792068 --alsologtostderr -v=1
addons_test.go:822: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-792068 --alsologtostderr -v=1: (1.786156726s)
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-fqfmq" [5c19154f-65c8-4cbe-af55-8ad6aa866b3a] Pending
helpers_test.go:344: "headlamp-58b88cff49-fqfmq" [5c19154f-65c8-4cbe-af55-8ad6aa866b3a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-fqfmq" [5c19154f-65c8-4cbe-af55-8ad6aa866b3a] Running
addons_test.go:827: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.028939201s
--- PASS: TestAddons/parallel/Headlamp (13.82s)

TestAddons/parallel/CloudSpanner (6.13s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-wwcdr" [90bce1e3-dc5e-4206-8af5-1249ff33bc62] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:855: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.027401847s
addons_test.go:858: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-792068
addons_test.go:858: (dbg) Done: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-792068: (1.092407242s)
--- PASS: TestAddons/parallel/CloudSpanner (6.13s)

TestAddons/parallel/LocalPath (10.69s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:871: (dbg) Run:  kubectl --context addons-792068 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:877: (dbg) Run:  kubectl --context addons-792068 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:881: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6a93786e-500c-4c20-839c-3f0e93ea78fd] Pending
helpers_test.go:344: "test-local-path" [6a93786e-500c-4c20-839c-3f0e93ea78fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6a93786e-500c-4c20-839c-3f0e93ea78fd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6a93786e-500c-4c20-839c-3f0e93ea78fd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:884: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.035441561s
addons_test.go:889: (dbg) Run:  kubectl --context addons-792068 get pvc test-pvc -o=json
addons_test.go:898: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 ssh "cat /opt/local-path-provisioner/pvc-85bf44a9-7629-4bdb-ac2c-0a5f3af53dd1_default_test-pvc/file1"
addons_test.go:910: (dbg) Run:  kubectl --context addons-792068 delete pod test-local-path
addons_test.go:914: (dbg) Run:  kubectl --context addons-792068 delete pvc test-pvc
addons_test.go:918: (dbg) Run:  out/minikube-linux-arm64 -p addons-792068 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.69s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:648: (dbg) Run:  kubectl --context addons-792068 create ns new-namespace
addons_test.go:662: (dbg) Run:  kubectl --context addons-792068 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-792068
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-792068: (12.091568948s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-792068
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-792068
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-792068
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestCertOptions (37.41s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-656717 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-656717 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.445770298s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-656717 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-656717 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-656717 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-656717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-656717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-656717: (2.236312956s)
--- PASS: TestCertOptions (37.41s)

TestCertExpiration (249.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-940824 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1005 21:53:37.312159 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-940824 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.438249144s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-940824 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-940824 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (24.550344187s)
helpers_test.go:175: Cleaning up "cert-expiration-940824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-940824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-940824: (2.085981625s)
--- PASS: TestCertExpiration (249.08s)

TestForceSystemdFlag (43.57s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-591577 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-591577 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.652441479s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-591577 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-591577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-591577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-591577: (2.520121699s)
--- PASS: TestForceSystemdFlag (43.57s)

TestForceSystemdEnv (38.91s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-782488 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1005 21:52:54.597951 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-782488 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.452396522s)
helpers_test.go:175: Cleaning up "force-systemd-env-782488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-782488
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-782488: (2.45317297s)
--- PASS: TestForceSystemdEnv (38.91s)

TestErrorSpam/setup (30.57s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-318805 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-318805 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-318805 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-318805 --driver=docker  --container-runtime=crio: (30.574376885s)
--- PASS: TestErrorSpam/setup (30.57s)

TestErrorSpam/start (0.86s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.08s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.91s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 pause
--- PASS: TestErrorSpam/pause (1.91s)

TestErrorSpam/unpause (2.14s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 unpause
--- PASS: TestErrorSpam/unpause (2.14s)

TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 stop: (1.30650807s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-318805 --log_dir /tmp/nospam-318805 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17363-1448442/.minikube/files/etc/test/nested/copy/1453786/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.92s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1005 21:22:54.598936 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.605984 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.616264 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.636630 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.677305 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.757687 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:54.918354 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:55.238904 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:55.879931 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:57.160145 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:22:59.721019 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:23:04.841743 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:23:15.082538 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:23:35.563116 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-322912 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.91497636s)
--- PASS: TestFunctional/serial/StartWithProxy (78.92s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.23s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --alsologtostderr -v=8
E1005 21:24:16.523287 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-322912 --alsologtostderr -v=8: (43.226495629s)
functional_test.go:659: soft start took 43.227120382s for "functional-322912" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.23s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-322912 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:3.1: (1.408129097s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:3.3: (1.464663553s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 cache add registry.k8s.io/pause:latest: (1.224577794s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.10s)
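
As an aside for readers reproducing this by hand: `cache add` pulls an image into the host-side cache and loads it into the cluster's runtime. A minimal sketch, assuming a running profile named functional-322912 and a stock minikube binary on PATH (this CI run invokes out/minikube-linux-arm64 directly):

	# Pull each tag into the host cache and load it into the node's CRI-O store.
	minikube -p functional-322912 cache add registry.k8s.io/pause:3.1
	minikube -p functional-322912 cache add registry.k8s.io/pause:3.3
	minikube -p functional-322912 cache add registry.k8s.io/pause:latest
	# Show what the cache now tracks (cache list is global, not per-profile).
	minikube cache list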

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-322912 /tmp/TestFunctionalserialCacheCmdcacheadd_local3674715390/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache add minikube-local-cache-test:functional-322912
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache delete minikube-local-cache-test:functional-322912
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-322912
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (494.164159ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 cache reload: (1.27650191s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.45s)
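
The reload sequence above boils down to: remove the image inside the node, confirm it is gone, then repopulate the runtime from the host cache. A minimal sketch under the same assumptions as above:

	# Remove the image from the node's CRI-O image store.
	minikube -p functional-322912 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti now fails with 'no such image', which is the expected intermediate state.
	minikube -p functional-322912 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
	# Push everything in the host cache back into the node, then re-verify.
	minikube -p functional-322912 cache reload
	minikube -p functional-322912 ssh sudo crictl inspecti registry.k8s.io/pause:latest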

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 kubectl -- --context functional-322912 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-322912 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (34.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-322912 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.152435539s)
functional_test.go:757: restart took 34.152569332s for "functional-322912" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.15s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-322912 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
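
The health check reads the control-plane pods via their tier=control-plane label and asserts phase Running plus a Ready condition. A hand-run equivalent, assuming kubectl points at the same context:

	# Print each control-plane pod with its phase; all should report Running.
	kubectl --context functional-322912 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'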

TestFunctional/serial/LogsCmd (1.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 logs
E1005 21:25:38.443912 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 logs: (1.940586192s)
--- PASS: TestFunctional/serial/LogsCmd (1.94s)

TestFunctional/serial/LogsFileCmd (1.95s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 logs --file /tmp/TestFunctionalserialLogsFileCmd822648826/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 logs --file /tmp/TestFunctionalserialLogsFileCmd822648826/001/logs.txt: (1.946076274s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.95s)

TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-322912 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-322912
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-322912: exit status 115 (646.383735ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32225 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-322912 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.82s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 config get cpus: exit status 14 (98.76834ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 config get cpus: exit status 14 (65.951623ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
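
Note the exit-code contract being exercised: `config get` on an unset key exits 14, and the same key round-trips through set/unset. A minimal sketch:

	minikube -p functional-322912 config get cpus     # key not set: exit status 14
	minikube -p functional-322912 config set cpus 2
	minikube -p functional-322912 config get cpus     # prints 2, exit status 0
	minikube -p functional-322912 config unset cpus
	minikube -p functional-322912 config get cpus; echo $?   # back to 14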

TestFunctional/parallel/DashboardCmd (13.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-322912 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-322912 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1480826: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.21s)

TestFunctional/parallel/DryRun (0.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-322912 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (229.287557ms)

-- stdout --
	* [functional-322912] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1005 21:26:35.967682 1480231 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:26:35.967935 1480231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:35.967964 1480231 out.go:309] Setting ErrFile to fd 2...
	I1005 21:26:35.967985 1480231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:35.968279 1480231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:26:35.968698 1480231 out.go:303] Setting JSON to false
	I1005 21:26:35.969941 1480231 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25743,"bootTime":1696515453,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:26:35.970064 1480231 start.go:138] virtualization:  
	I1005 21:26:35.972512 1480231 out.go:177] * [functional-322912] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:26:35.975034 1480231 notify.go:220] Checking for updates...
	I1005 21:26:35.975115 1480231 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:26:35.981969 1480231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:26:35.984157 1480231 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:26:35.986317 1480231 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:26:35.988088 1480231 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:26:35.989656 1480231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:26:35.992016 1480231 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:26:35.992751 1480231 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:26:36.035560 1480231 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:26:36.035684 1480231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:26:36.125732 1480231 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-05 21:26:36.11479971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:26:36.125849 1480231 docker.go:294] overlay module found
	I1005 21:26:36.129187 1480231 out.go:177] * Using the docker driver based on existing profile
	I1005 21:26:36.131280 1480231 start.go:298] selected driver: docker
	I1005 21:26:36.131304 1480231 start.go:902] validating driver "docker" against &{Name:functional-322912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-322912 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:26:36.131444 1480231 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:26:36.134277 1480231 out.go:177] 
	W1005 21:26:36.136167 1480231 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1005 21:26:36.137933 1480231 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.64s)
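
The dry run validates flags against the existing profile without mutating it; a request below the 1800MB usable minimum fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). Reproduced by hand:

	# Fails validation before any driver work happens.
	minikube start -p functional-322912 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23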

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-322912 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-322912 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (298.912634ms)

-- stdout --
	* [functional-322912] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1005 21:26:36.622654 1480386 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:26:36.622887 1480386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:36.622899 1480386 out.go:309] Setting ErrFile to fd 2...
	I1005 21:26:36.622906 1480386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:26:36.623266 1480386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:26:36.623621 1480386 out.go:303] Setting JSON to false
	I1005 21:26:36.627253 1480386 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25744,"bootTime":1696515453,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:26:36.627336 1480386 start.go:138] virtualization:  
	I1005 21:26:36.631000 1480386 out.go:177] * [functional-322912] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1005 21:26:36.632910 1480386 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:26:36.634546 1480386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:26:36.633112 1480386 notify.go:220] Checking for updates...
	I1005 21:26:36.639382 1480386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:26:36.641194 1480386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:26:36.643228 1480386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:26:36.644916 1480386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:26:36.646983 1480386 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:26:36.647504 1480386 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:26:36.682814 1480386 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:26:36.682902 1480386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:26:36.828344 1480386 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-05 21:26:36.81477559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:26:36.828452 1480386 docker.go:294] overlay module found
	I1005 21:26:36.831076 1480386 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1005 21:26:36.832882 1480386 start.go:298] selected driver: docker
	I1005 21:26:36.832900 1480386 start.go:902] validating driver "docker" against &{Name:functional-322912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-322912 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1005 21:26:36.833022 1480386 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:26:36.838329 1480386 out.go:177] 
	W1005 21:26:36.840196 1480386 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1005 21:26:36.842546 1480386 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
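
`status -f` takes a Go template over the status struct (fields Host, Kubelet, APIServer, Kubeconfig); the second invocation above uses exactly that (the test's own label string spells "kublet"). A minimal sketch:

	minikube -p functional-322912 status
	# Custom template over selected status fields.
	minikube -p functional-322912 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	minikube -p functional-322912 status -o json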

TestFunctional/parallel/ServiceCmdConnect (7.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-322912 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-322912 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qdmtm" [62302e29-cf0c-4428-9ce0-083a0db8720e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qdmtm" [62302e29-cf0c-4428-9ce0-083a0db8720e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.015143616s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32701
functional_test.go:1674: http://192.168.49.2:32701: success! body:

Hostname: hello-node-connect-7799dfb7c6-qdmtm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32701
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)
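
The connectivity check is: create a deployment, expose it as a NodePort service, resolve the node URL through minikube, and curl it. A minimal sketch with the same image; the explicit wait stands in for the test's pod polling:

	kubectl --context functional-322912 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-322912 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-322912 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
	# Resolve the NodePort URL (e.g. http://192.168.49.2:32701) and hit it.
	URL=$(minikube -p functional-322912 service hello-node-connect --url)
	curl -s "$URL"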

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (25.54s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3e4a9757-c8bc-4f87-8f28-c837fe916ecc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012428153s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-322912 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-322912 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-322912 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-322912 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e5c7d9e-15f9-4998-b2a0-443a7d4d057e] Pending
helpers_test.go:344: "sp-pod" [0e5c7d9e-15f9-4998-b2a0-443a7d4d057e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e5c7d9e-15f9-4998-b2a0-443a7d4d057e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.014989892s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-322912 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-322912 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-322912 delete -f testdata/storage-provisioner/pod.yaml: (1.013334886s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-322912 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d06fe299-c096-486a-8934-ea4cb8bb8ba6] Pending
helpers_test.go:344: "sp-pod" [d06fe299-c096-486a-8934-ea4cb8bb8ba6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d06fe299-c096-486a-8934-ea4cb8bb8ba6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.01658048s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-322912 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.54s)
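
The persistence check is: bind a pod to a claim, write through it, delete the pod, then read the file back from a fresh pod on the same claim. A minimal sketch, where pvc.yaml and pod.yaml stand in for the test's testdata/storage-provisioner manifests (a PVC named myclaim and a pod sp-pod mounting it at /tmp/mount):

	kubectl --context functional-322912 apply -f pvc.yaml
	kubectl --context functional-322912 apply -f pod.yaml
	kubectl --context functional-322912 exec sp-pod -- touch /tmp/mount/foo
	# Delete the pod; the claim and its data outlive it.
	kubectl --context functional-322912 delete -f pod.yaml
	kubectl --context functional-322912 apply -f pod.yaml
	kubectl --context functional-322912 exec sp-pod -- ls /tmp/mount   # foo survives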

TestFunctional/parallel/SSHCmd (0.87s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

TestFunctional/parallel/CpCmd (1.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh -n functional-322912 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 cp functional-322912:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3799542819/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh -n functional-322912 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)
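
`minikube cp` copies in both directions, and each copy is verified with an ssh cat, as above. A minimal sketch (the /tmp destination on the host is arbitrary):

	# Host -> node, then verify inside the node.
	minikube -p functional-322912 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-322912 ssh -n functional-322912 "sudo cat /home/docker/cp-test.txt"
	# Node -> host, using the <node>:<path> source form.
	minikube -p functional-322912 cp functional-322912:/home/docker/cp-test.txt /tmp/cp-test.txt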

TestFunctional/parallel/FileSync (0.46s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1453786/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /etc/test/nested/copy/1453786/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.46s)

TestFunctional/parallel/CertSync (2.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1453786.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /etc/ssl/certs/1453786.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1453786.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /usr/share/ca-certificates/1453786.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14537862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /etc/ssl/certs/14537862.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14537862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /usr/share/ca-certificates/14537862.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.04s)
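
The synced certificate is expected in three places inside the node: under /etc/ssl/certs and /usr/share/ca-certificates by file name, and under /etc/ssl/certs by OpenSSL subject hash (51391683.0 here). A sketch of the same checks; the openssl line, assuming the .pem is available on the host, shows where the hashed name comes from:

	minikube -p functional-322912 ssh "sudo cat /etc/ssl/certs/1453786.pem"
	minikube -p functional-322912 ssh "sudo cat /usr/share/ca-certificates/1453786.pem"
	# Subject hash of the cert; the test expects 51391683 for this one.
	openssl x509 -hash -noout -in 1453786.pem
	minikube -p functional-322912 ssh "sudo cat /etc/ssl/certs/51391683.0"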

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-322912 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "sudo systemctl is-active docker": exit status 1 (342.493086ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "sudo systemctl is-active containerd": exit status 1 (400.962413ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
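
With CRI-O as the active runtime, the docker and containerd units must be inactive. `systemctl is-active` prints the state and exits non-zero for anything other than active (3 for inactive), which is what propagates through ssh above. A minimal sketch, assuming the unit names used in the minikube node:

	minikube -p functional-322912 ssh "sudo systemctl is-active crio"         # active, exit 0
	minikube -p functional-322912 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	minikube -p functional-322912 ssh "sudo systemctl is-active containerd"   # inactive, exit 3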

TestFunctional/parallel/License (0.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.08s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 version -o=json --components: (1.075126997s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-322912 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-322912
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-322912 image ls --format short --alsologtostderr:
I1005 21:26:39.464589 1480865 out.go:296] Setting OutFile to fd 1 ...
I1005 21:26:39.464752 1480865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:39.464761 1480865 out.go:309] Setting ErrFile to fd 2...
I1005 21:26:39.464767 1480865 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:39.465074 1480865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
I1005 21:26:39.465816 1480865 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:39.465958 1480865 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:39.466440 1480865 cli_runner.go:164] Run: docker container inspect functional-322912 --format={{.State.Status}}
I1005 21:26:39.489084 1480865 ssh_runner.go:195] Run: systemctl --version
I1005 21:26:39.489137 1480865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-322912
I1005 21:26:39.515402 1480865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34087 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/functional-322912/id_rsa Username:docker}
I1005 21:26:39.611418 1480865 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-322912 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/my-image                      | functional-322912  | 0a596d0ba9629 | 1.64MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | df8fd1ca35d66 | 45.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | latest             | 2a4fbb36e9660 | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-322912  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-322912 image ls --format table --alsologtostderr:
I1005 21:26:43.776752 1481216 out.go:296] Setting OutFile to fd 1 ...
I1005 21:26:43.776925 1481216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:43.776932 1481216 out.go:309] Setting ErrFile to fd 2...
I1005 21:26:43.776938 1481216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:43.777234 1481216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
I1005 21:26:43.777927 1481216 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:43.778071 1481216 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:43.778663 1481216 cli_runner.go:164] Run: docker container inspect functional-322912 --format={{.State.Status}}
I1005 21:26:43.800495 1481216 ssh_runner.go:195] Run: systemctl --version
I1005 21:26:43.800557 1481216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-322912
I1005 21:26:43.826940 1481216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34087 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/functional-322912/id_rsa Username:docker}
I1005 21:26:43.937708 1481216 ssh_runner.go:195] Run: sudo crictl images --output json
2023/10/05 21:26:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-322912 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196620"},{"id":"0a596d0ba9629a7027750b1d2f005d3c22157c6185e35962fdcaac1dab33de64","repoDigests":["localhost/my-image@sha256:81d7cc42a49f5e09ca134d1f343c153728adc5ec3241bc24fa5c247ca202
50c8"],"repoTags":["localhost/my-image:functional-322912"],"size":"1640225"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070
adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["regist
ry.k8s.io/pause:3.3"],"size":"487479"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef","docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45331256"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdc
ef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:70
31c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"1c88bcd495b4cfdd7e973fece01d3404f284ad80baea588b5d22de8080f77ce6","repoDigests":["docker.io/library/d19ca08916fe86d88628e8329d24bec2048ef03d67fb56f17bd7e50395654887-tmp@sha256:4a13a62d3ba1766eee1eea7ee3c78765f61d22e09a4ad61f415e654a1198c273"],"repoTags":[],"size":"1637643"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-322912"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags
":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-322912 image ls --format json --alsologtostderr:
I1005 21:26:43.462595 1481188 out.go:296] Setting OutFile to fd 1 ...
I1005 21:26:43.462829 1481188 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:43.462856 1481188 out.go:309] Setting ErrFile to fd 2...
I1005 21:26:43.462878 1481188 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:43.463236 1481188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
I1005 21:26:43.464152 1481188 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:43.464361 1481188 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:43.465303 1481188 cli_runner.go:164] Run: docker container inspect functional-322912 --format={{.State.Status}}
I1005 21:26:43.490623 1481188 ssh_runner.go:195] Run: systemctl --version
I1005 21:26:43.490676 1481188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-322912
I1005 21:26:43.518545 1481188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34087 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/functional-322912/id_rsa Username:docker}
I1005 21:26:43.619547 1481188 ssh_runner.go:195] Run: sudo crictl images --output json
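The JSON stdout above is a flat array of image records with id, repoDigests, repoTags, and size keys (size is a byte count encoded as a string). A minimal Go sketch for consuming it, assuming the same binary path and profile name as this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors one element of the array printed by `image ls --format json`.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string, e.g. "520014"
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-322912",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 { // intermediate build layers have empty repoTags
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}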
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-322912 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324
repoTags:
- docker.io/library/nginx:latest
size: "196196620"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
- docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003
repoTags:
- docker.io/library/nginx:alpine
size: "45331256"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-322912
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-322912 image ls --format yaml --alsologtostderr:
I1005 21:26:39.721766 1480890 out.go:296] Setting OutFile to fd 1 ...
I1005 21:26:39.721931 1480890 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:39.721942 1480890 out.go:309] Setting ErrFile to fd 2...
I1005 21:26:39.721949 1480890 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:39.722281 1480890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
I1005 21:26:39.722941 1480890 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:39.723085 1480890 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:39.723689 1480890 cli_runner.go:164] Run: docker container inspect functional-322912 --format={{.State.Status}}
I1005 21:26:39.743061 1480890 ssh_runner.go:195] Run: systemctl --version
I1005 21:26:39.743115 1480890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-322912
I1005 21:26:39.761233 1480890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34087 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/functional-322912/id_rsa Username:docker}
I1005 21:26:39.859309 1480890 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh pgrep buildkitd: exit status 1 (338.580602ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image build -t localhost/my-image:functional-322912 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image build -t localhost/my-image:functional-322912 testdata/build --alsologtostderr: (2.824719564s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-322912 image build -t localhost/my-image:functional-322912 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1c88bcd495b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-322912
--> 0a596d0ba96
Successfully tagged localhost/my-image:functional-322912
0a596d0ba9629a7027750b1d2f005d3c22157c6185e35962fdcaac1dab33de64
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-322912 image build -t localhost/my-image:functional-322912 testdata/build --alsologtostderr:
I1005 21:26:40.313282 1480969 out.go:296] Setting OutFile to fd 1 ...
I1005 21:26:40.314104 1480969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:40.314119 1480969 out.go:309] Setting ErrFile to fd 2...
I1005 21:26:40.314126 1480969 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1005 21:26:40.314486 1480969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
I1005 21:26:40.315265 1480969 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:40.316012 1480969 config.go:182] Loaded profile config "functional-322912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1005 21:26:40.316708 1480969 cli_runner.go:164] Run: docker container inspect functional-322912 --format={{.State.Status}}
I1005 21:26:40.341557 1480969 ssh_runner.go:195] Run: systemctl --version
I1005 21:26:40.341615 1480969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-322912
I1005 21:26:40.365195 1480969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34087 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/functional-322912/id_rsa Username:docker}
I1005 21:26:40.463194 1480969 build_images.go:151] Building image from path: /tmp/build.2980171192.tar
I1005 21:26:40.463273 1480969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1005 21:26:40.474285 1480969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2980171192.tar
I1005 21:26:40.478902 1480969 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2980171192.tar: stat -c "%s %y" /var/lib/minikube/build/build.2980171192.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2980171192.tar': No such file or directory
I1005 21:26:40.478936 1480969 ssh_runner.go:362] scp /tmp/build.2980171192.tar --> /var/lib/minikube/build/build.2980171192.tar (3072 bytes)
I1005 21:26:40.515841 1480969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2980171192
I1005 21:26:40.526790 1480969 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2980171192 -xf /var/lib/minikube/build/build.2980171192.tar
I1005 21:26:40.538271 1480969 crio.go:297] Building image: /var/lib/minikube/build/build.2980171192
I1005 21:26:40.538344 1480969 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-322912 /var/lib/minikube/build/build.2980171192 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1005 21:26:43.016753 1480969 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-322912 /var/lib/minikube/build/build.2980171192 --cgroup-manager=cgroupfs: (2.478375282s)
I1005 21:26:43.016816 1480969 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2980171192
I1005 21:26:43.038325 1480969 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2980171192.tar
I1005 21:26:43.057168 1480969 build_images.go:207] Built localhost/my-image:functional-322912 from /tmp/build.2980171192.tar
I1005 21:26:43.057197 1480969 build_images.go:123] succeeded building to: functional-322912
I1005 21:26:43.057202 1480969 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
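The build log above shows what `image build` does under crio: the context directory is tarred locally (/tmp/build.2980171192.tar), copied into the node under /var/lib/minikube/build, extracted, and built with `sudo podman build --cgroup-manager=cgroupfs`. A hedged Go sketch of driving the same flow end to end; the binary path, profile, tag, and context directory are the ones used in this run, and this is an illustration rather than the test's own code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build testdata/build (a directory containing a Dockerfile) inside the node.
	build := exec.Command("out/minikube-linux-arm64", "-p", "functional-322912",
		"image", "build", "-t", "localhost/my-image:functional-322912", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}
	// Confirm the new tag is now visible to the container runtime.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-322912",
		"image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}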
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

TestFunctional/parallel/ImageCommands/Setup (2.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.700221483s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-322912
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr: (5.535738346s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.83s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-322912 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-322912 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-z5f4v" [f699d27a-246f-49e8-8673-98fabc8a7ebc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-z5f4v" [f699d27a-246f-49e8-8673-98fabc8a7ebc] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.030231488s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr: (2.834024306s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.981483222s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-322912
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image load --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr: (4.515231685s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service list -o json
functional_test.go:1493: Took "410.833621ms" to run "out/minikube-linux-arm64 -p functional-322912 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30852
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30852
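`service <name> --url` prints a plain NodePort URL on stdout (http://192.168.49.2:30852 in this run), which makes a scripted smoke test against the endpoint straightforward. A minimal sketch, assuming the same binary path and profile as above:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service's reachable URL.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-322912",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	// Hit the endpoint once and report the status.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d (%d bytes)\n", url, resp.StatusCode, len(body))
}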
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1477676: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.80s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-322912 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8b7942b9-314f-45cb-bdfa-0ed08a6c3af3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8b7942b9-314f-45cb-bdfa-0ed08a6c3af3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.015896129s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image save gcr.io/google-containers/addon-resizer:functional-322912 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image save gcr.io/google-containers/addon-resizer:functional-322912 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.036118611s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image rm gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-322912 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (2.176116578s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.45s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-322912
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 image save --daemon gcr.io/google-containers/addon-resizer:functional-322912 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-322912
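Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full image round trip: save a tagged image to a tarball, remove the tag from the runtime, restore it from the tarball, and push it back into the local Docker daemon. A compressed Go sketch of the file-based half; the tar path here is hypothetical (the test used a Jenkins workspace path), and the binary and profile are from this run:

package main

import "os/exec"

// run invokes the minikube binary used in this report and fails loudly.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-322912"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	tag := "gcr.io/google-containers/addon-resizer:functional-322912"
	tar := "/tmp/addon-resizer-save.tar" // hypothetical local path
	run("image", "save", tag, tar)       // write the image to a tarball
	run("image", "rm", tag)              // drop the tag from the runtime
	run("image", "load", tar)            // restore it from the tarball
}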
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-322912 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.1.136 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-322912 tunnel --alsologtostderr] ...
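The serial tunnel tests above follow a fixed pattern: start `minikube tunnel` as a daemon, wait for the LoadBalancer service to be assigned an ingress IP (10.104.1.136 here), hit that IP directly, then tear the tunnel down. The wait step can be reproduced with the same kubectl jsonpath query the WaitService/IngressIP step runs; a sketch, where the context name is taken from this run and the 60-second polling budget is an arbitrary choice:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the tunnel has assigned the LoadBalancer an ingress IP.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-322912",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("ingress IP: %s\n", out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is the tunnel running?")
}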
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "383.794135ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "56.643472ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "354.984117ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "59.892982ms" to run "out/minikube-linux-arm64 profile list -o json --light"
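`profile list -o json` and its `--light` variant both emit a JSON document; the timings above show the light form, which skips probing cluster status, is roughly six times faster (~355 ms vs ~60 ms). The log does not show the document's schema, so a sketch that decodes it generically rather than assuming field names (it does assume the top level is a JSON object):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode without committing to a schema: keep each top-level value raw.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}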
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdany-port1985529552/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696541185130392215" to /tmp/TestFunctionalparallelMountCmdany-port1985529552/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696541185130392215" to /tmp/TestFunctionalparallelMountCmdany-port1985529552/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696541185130392215" to /tmp/TestFunctionalparallelMountCmdany-port1985529552/001/test-1696541185130392215
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.521157ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  5 21:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  5 21:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  5 21:26 test-1696541185130392215
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh cat /mount-9p/test-1696541185130392215
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-322912 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d5c0bf6e-9945-4e2a-9bcf-0eda9158ceac] Pending
helpers_test.go:344: "busybox-mount" [d5c0bf6e-9945-4e2a-9bcf-0eda9158ceac] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d5c0bf6e-9945-4e2a-9bcf-0eda9158ceac] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d5c0bf6e-9945-4e2a-9bcf-0eda9158ceac] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.021559579s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-322912 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdany-port1985529552/001:/mount-9p --alsologtostderr -v=1] ...
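One pattern in the mount test above is worth noting: the first `findmnt -T /mount-9p` probe is allowed to fail, because the 9p mount is established asynchronously after the `minikube mount` daemon starts, and the test simply retries. A sketch of the same probe, assuming the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Check whether the 9p mount is visible inside the node yet.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-322912",
		"ssh", "findmnt -T /mount-9p").Output()
	if err != nil {
		// Before the mount is established, findmnt exits non-zero; this is
		// the transient failure the log above retries past.
		fmt.Println("mount not visible yet:", err)
		return
	}
	fmt.Print(string(out))
}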
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.11s)

TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdspecific-port1932968231/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.057492ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdspecific-port1932968231/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "sudo umount -f /mount-9p": exit status 1 (294.788656ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-322912 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdspecific-port1932968231/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T" /mount1: exit status 1 (681.505715ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-322912 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-322912 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-322912 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2654998828/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-322912
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-322912
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-322912
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (92.45s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-570164 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1005 21:27:54.598036 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:28:22.284938 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-570164 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m32.448049417s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.45s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.01s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons enable ingress --alsologtostderr -v=5: (11.008665599s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.01s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-570164 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

TestJSONOutput/start/Command (77.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-859140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1005 21:32:11.869263 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:32:54.598004 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-859140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.458555999s)
--- PASS: TestJSONOutput/start/Command (77.46s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.87s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-859140 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.87s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-859140 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-859140 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-859140 --output=json --user=testUser: (5.900713356s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-368978 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-368978 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.507099ms)

-- stdout --
	{"specversion":"1.0","id":"3666f719-add4-430a-9f10-a2b2d6a5adba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-368978] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"118b8f39-bd60-474b-af25-b9bac6dd41fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"432ef878-bb01-4a43-836e-fd7bc41123dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7fe9a0d7-09e6-4e47-b9ff-5c64f0a6946e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig"}}
	{"specversion":"1.0","id":"a1cc6090-fe29-4252-9bb0-f92ecbf5643f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube"}}
	{"specversion":"1.0","id":"b2c7e946-80b2-46f0-9b33-9c68f41f975c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a18f3345-a66a-480c-95fd-1f1212768ec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a2d5f1a7-807b-4901-b2d8-f639848bc66f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-368978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-368978
--- PASS: TestErrorJSONOutput (0.24s)
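Note: each line minikube emits under --output=json (see the stdout capture above) is a self-contained CloudEvents-style object; "type" distinguishes step, info, and error events, and "data" carries flat string fields such as message and exitcode. A minimal consumer sketch in Go, modeling only the fields visible in this report (any wider schema is not assumed):

// Sketch: scan minikube --output=json lines and collect the error events.
package jsonout

import (
	"bufio"
	"encoding/json"
	"io"
)

// event models only the fields visible in the capture above.
type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.{step,info,error}
	Data map[string]string `json:"data"` // message, currentstep, exitcode, ...
}

// errorsFrom decodes one JSON object per line; for the run above it would
// return the single DRV_UNSUPPORTED_OS event with exitcode "56".
func errorsFrom(r io.Reader) ([]event, error) {
	var errs []event
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			return nil, err
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			errs = append(errs, ev)
		}
	}
	return errs, sc.Err()
}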

TestKicCustomNetwork/create_custom_network (44.68s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-351265 --network=
E1005 21:33:33.789494 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:33:37.312142 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.317412 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.327654 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.347896 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.388138 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.468406 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.628758 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:37.949041 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:38.589870 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:39.870070 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:42.430329 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:33:47.550530 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-351265 --network=: (42.626411126s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-351265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-351265
E1005 21:33:57.791225 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-351265: (2.027495367s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.68s)

TestKicCustomNetwork/use_default_bridge_network (33.79s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-585282 --network=bridge
E1005 21:34:18.272069 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-585282 --network=bridge: (31.774077043s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-585282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-585282
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-585282: (1.9980421s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.79s)

TestKicExistingNetwork (35.62s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-306500 --network=existing-network
E1005 21:34:59.233204 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-306500 --network=existing-network: (33.418550484s)
helpers_test.go:175: Cleaning up "existing-network-306500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-306500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-306500: (2.047449917s)
--- PASS: TestKicExistingNetwork (35.62s)

TestKicCustomSubnet (35.88s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-658149 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-658149 --subnet=192.168.60.0/24: (33.683091252s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-658149 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-658149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-658149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-658149: (2.169368048s)
--- PASS: TestKicCustomSubnet (35.88s)

TestKicStaticIP (34.1s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-471370 --static-ip=192.168.200.200
E1005 21:35:49.946607 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-471370 --static-ip=192.168.200.200: (31.789591173s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-471370 ip
helpers_test.go:175: Cleaning up "static-ip-471370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-471370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-471370: (2.114345105s)
--- PASS: TestKicStaticIP (34.10s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.57s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-524548 --driver=docker  --container-runtime=crio
E1005 21:36:17.630161 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:36:21.153427 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-524548 --driver=docker  --container-runtime=crio: (29.546231627s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-527223 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-527223 --driver=docker  --container-runtime=crio: (32.717075328s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-524548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-527223
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-527223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-527223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-527223: (2.028275213s)
helpers_test.go:175: Cleaning up "first-524548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-524548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-524548: (1.988155627s)
--- PASS: TestMinikubeProfile (67.57s)

TestMountStart/serial/StartWithMountFirst (7.02s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-255996 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-255996 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.020277997s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.02s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-255996 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.31s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-257908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-257908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.30957146s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-257908 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-255996 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-255996 --alsologtostderr -v=5: (1.688105929s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-257908 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-257908
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-257908: (1.217434485s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.43s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-257908
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-257908: (7.426463857s)
--- PASS: TestMountStart/serial/RestartStopped (8.43s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-257908 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (127.08s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1005 21:37:54.598670 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:38:37.312022 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:39:04.993895 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 21:39:17.645779 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m6.15254744s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.08s)

TestMultiNode/serial/DeployApp2Nodes (5.74s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-814558 -- rollout status deployment/busybox: (3.511074358s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-hrkj8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814558 -- exec busybox-5bc68d56bd-ztvv9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.74s)

TestMultiNode/serial/AddNode (47.81s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-814558 -v 3 --alsologtostderr
E1005 21:40:49.946261 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-814558 -v 3 --alsologtostderr: (47.063121892s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.81s)

TestMultiNode/serial/ProfileList (0.36s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp testdata/cp-test.txt multinode-814558:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile172846852/001/cp-test_multinode-814558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558:/home/docker/cp-test.txt multinode-814558-m02:/home/docker/cp-test_multinode-814558_multinode-814558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test_multinode-814558_multinode-814558-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558:/home/docker/cp-test.txt multinode-814558-m03:/home/docker/cp-test_multinode-814558_multinode-814558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test_multinode-814558_multinode-814558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp testdata/cp-test.txt multinode-814558-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile172846852/001/cp-test_multinode-814558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m02:/home/docker/cp-test.txt multinode-814558:/home/docker/cp-test_multinode-814558-m02_multinode-814558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test_multinode-814558-m02_multinode-814558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m02:/home/docker/cp-test.txt multinode-814558-m03:/home/docker/cp-test_multinode-814558-m02_multinode-814558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test_multinode-814558-m02_multinode-814558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp testdata/cp-test.txt multinode-814558-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile172846852/001/cp-test_multinode-814558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m03:/home/docker/cp-test.txt multinode-814558:/home/docker/cp-test_multinode-814558-m03_multinode-814558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558 "sudo cat /home/docker/cp-test_multinode-814558-m03_multinode-814558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 cp multinode-814558-m03:/home/docker/cp-test.txt multinode-814558-m02:/home/docker/cp-test_multinode-814558-m03_multinode-814558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 ssh -n multinode-814558-m02 "sudo cat /home/docker/cp-test_multinode-814558-m03_multinode-814558-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.11s)

TestMultiNode/serial/StopNode (2.37s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-814558 node stop m03: (1.257754662s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814558 status: exit status 7 (558.680922ms)

-- stdout --
	multinode-814558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr: exit status 7 (548.498512ms)

-- stdout --
	multinode-814558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 21:41:12.412555 1527882 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:41:12.412733 1527882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:41:12.412744 1527882 out.go:309] Setting ErrFile to fd 2...
	I1005 21:41:12.412750 1527882 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:41:12.413015 1527882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:41:12.413197 1527882 out.go:303] Setting JSON to false
	I1005 21:41:12.413240 1527882 mustload.go:65] Loading cluster: multinode-814558
	I1005 21:41:12.413375 1527882 notify.go:220] Checking for updates...
	I1005 21:41:12.413772 1527882 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:41:12.413788 1527882 status.go:255] checking status of multinode-814558 ...
	I1005 21:41:12.414292 1527882 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:41:12.434737 1527882 status.go:330] multinode-814558 host status = "Running" (err=<nil>)
	I1005 21:41:12.434758 1527882 host.go:66] Checking if "multinode-814558" exists ...
	I1005 21:41:12.435068 1527882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558
	I1005 21:41:12.453921 1527882 host.go:66] Checking if "multinode-814558" exists ...
	I1005 21:41:12.454224 1527882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:41:12.454276 1527882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558
	I1005 21:41:12.472725 1527882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34152 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558/id_rsa Username:docker}
	I1005 21:41:12.568413 1527882 ssh_runner.go:195] Run: systemctl --version
	I1005 21:41:12.574010 1527882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:41:12.587565 1527882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:41:12.664155 1527882 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-05 21:41:12.654077524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:41:12.664776 1527882 kubeconfig.go:92] found "multinode-814558" server: "https://192.168.58.2:8443"
	I1005 21:41:12.664795 1527882 api_server.go:166] Checking apiserver status ...
	I1005 21:41:12.664838 1527882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1005 21:41:12.678806 1527882 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1279/cgroup
	I1005 21:41:12.690619 1527882 api_server.go:182] apiserver freezer: "6:freezer:/docker/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/crio/crio-e5179ec2a1297987cb6c9fca05717ff809ef4c2adb9643061f05de8f1336b32b"
	I1005 21:41:12.690689 1527882 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/058ddd99bc476f2905c1984209d75bf2f225ec79d4e30b3c20b3f7d1d6fa1347/crio/crio-e5179ec2a1297987cb6c9fca05717ff809ef4c2adb9643061f05de8f1336b32b/freezer.state
	I1005 21:41:12.701297 1527882 api_server.go:204] freezer state: "THAWED"
	I1005 21:41:12.701327 1527882 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1005 21:41:12.710934 1527882 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1005 21:41:12.710969 1527882 status.go:421] multinode-814558 apiserver status = Running (err=<nil>)
	I1005 21:41:12.710980 1527882 status.go:257] multinode-814558 status: &{Name:multinode-814558 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:41:12.710999 1527882 status.go:255] checking status of multinode-814558-m02 ...
	I1005 21:41:12.711357 1527882 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Status}}
	I1005 21:41:12.731151 1527882 status.go:330] multinode-814558-m02 host status = "Running" (err=<nil>)
	I1005 21:41:12.731178 1527882 host.go:66] Checking if "multinode-814558-m02" exists ...
	I1005 21:41:12.731480 1527882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814558-m02
	I1005 21:41:12.751334 1527882 host.go:66] Checking if "multinode-814558-m02" exists ...
	I1005 21:41:12.751676 1527882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1005 21:41:12.751740 1527882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814558-m02
	I1005 21:41:12.770464 1527882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/17363-1448442/.minikube/machines/multinode-814558-m02/id_rsa Username:docker}
	I1005 21:41:12.867521 1527882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1005 21:41:12.881605 1527882 status.go:257] multinode-814558-m02 status: &{Name:multinode-814558-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:41:12.881650 1527882 status.go:255] checking status of multinode-814558-m03 ...
	I1005 21:41:12.881989 1527882 cli_runner.go:164] Run: docker container inspect multinode-814558-m03 --format={{.State.Status}}
	I1005 21:41:12.900347 1527882 status.go:330] multinode-814558-m03 host status = "Stopped" (err=<nil>)
	I1005 21:41:12.900370 1527882 status.go:343] host is not running, skipping remaining checks
	I1005 21:41:12.900377 1527882 status.go:257] multinode-814558-m03 status: &{Name:multinode-814558-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
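Note: the --alsologtostderr trace above shows the order of the status probes: container state via "docker container inspect", the apiserver process via pgrep and its freezer cgroup, and finally an HTTPS GET against /healthz that must return 200 with body "ok". A sketch of that final probe in Go (skipping TLS verification is an assumption made here for the self-signed local apiserver, not necessarily what the status command does):

// Sketch: the /healthz probe logged at 21:41:12.701327 above.
package statusprobe

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz GETs https://<addr>/healthz and requires a 200 with body "ok",
// matching the "returned 200: ok" line in the trace; e.g.
// healthz("192.168.58.2:8443") would reproduce the check above.
func healthz(addr string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the local apiserver presents a self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s/healthz", addr))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d %q", resp.StatusCode, body)
	}
	return nil
}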

TestMultiNode/serial/StartAfterStop (12.47s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-814558 node start m03 --alsologtostderr: (11.623237461s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.47s)

TestMultiNode/serial/RestartKeepsNodes (123.67s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814558
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-814558
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-814558: (25.063679871s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814558 --wait=true -v=8 --alsologtostderr
E1005 21:42:54.598880 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814558 --wait=true -v=8 --alsologtostderr: (1m38.460523303s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814558
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.67s)

TestMultiNode/serial/DeleteNode (5.15s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-814558 node delete m03: (4.371598047s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.15s)

TestMultiNode/serial/StopMultiNode (24.08s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 stop
E1005 21:43:37.312234 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-814558 stop: (23.892536265s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814558 status: exit status 7 (87.973751ms)

-- stdout --
	multinode-814558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814558-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr: exit status 7 (94.999314ms)

-- stdout --
	multinode-814558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814558-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1005 21:43:58.221880 1535976 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:43:58.222087 1535976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:43:58.222097 1535976 out.go:309] Setting ErrFile to fd 2...
	I1005 21:43:58.222103 1535976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:43:58.222375 1535976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:43:58.222544 1535976 out.go:303] Setting JSON to false
	I1005 21:43:58.222605 1535976 mustload.go:65] Loading cluster: multinode-814558
	I1005 21:43:58.222725 1535976 notify.go:220] Checking for updates...
	I1005 21:43:58.223013 1535976 config.go:182] Loaded profile config "multinode-814558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:43:58.223032 1535976 status.go:255] checking status of multinode-814558 ...
	I1005 21:43:58.223522 1535976 cli_runner.go:164] Run: docker container inspect multinode-814558 --format={{.State.Status}}
	I1005 21:43:58.244233 1535976 status.go:330] multinode-814558 host status = "Stopped" (err=<nil>)
	I1005 21:43:58.244253 1535976 status.go:343] host is not running, skipping remaining checks
	I1005 21:43:58.244260 1535976 status.go:257] multinode-814558 status: &{Name:multinode-814558 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1005 21:43:58.244298 1535976 status.go:255] checking status of multinode-814558-m02 ...
	I1005 21:43:58.244595 1535976 cli_runner.go:164] Run: docker container inspect multinode-814558-m02 --format={{.State.Status}}
	I1005 21:43:58.267559 1535976 status.go:330] multinode-814558-m02 host status = "Stopped" (err=<nil>)
	I1005 21:43:58.267579 1535976 status.go:343] host is not running, skipping remaining checks
	I1005 21:43:58.267586 1535976 status.go:257] multinode-814558-m02 status: &{Name:multinode-814558-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
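
Note: the two non-zero exits above are the expected outcome, not failures. `minikube status` uses exit code 7 to report a stopped host, so the test asserts on the code rather than on success. A sketch of the same check (profile name "demo" assumed):

    $ minikube -p demo stop
    $ minikube -p demo status
    $ echo $?                                # 7 = host stopped; 0 would mean still running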

TestMultiNode/serial/RestartMultiNode (84.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814558 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814558 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.131477311s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814558 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.90s)
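
Note: restarting an existing profile reuses its saved configuration, and --wait=true blocks until the watched components report healthy on every node. Sketch (profile name "demo" assumed):

    $ minikube start -p demo --wait=true     # brings control plane and workers back up
    $ kubectl get nodes                      # all nodes should return to Ready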

TestMultiNode/serial/ValidateNameConflict (36.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814558
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814558-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-814558-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.41653ms)

-- stdout --
	* [multinode-814558-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-814558-m02' is duplicated with machine name 'multinode-814558-m02' in profile 'multinode-814558'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814558-m03 --driver=docker  --container-runtime=crio
E1005 21:45:49.946733 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814558-m03 --driver=docker  --container-runtime=crio: (33.738181892s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-814558
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-814558: exit status 80 (357.956849ms)

-- stdout --
	* Adding node m03 to cluster multinode-814558
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-814558-m03 already exists in multinode-814558-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-814558-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-814558-m03: (2.03344232s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.26s)
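
Note: the two exit codes above distinguish the failure modes. Exit 14 (MK_USAGE) fires when a requested profile name collides with a machine name inside an existing multi-node profile; exit 80 (GUEST_NODE_ADD) fires when `node add` would clash with a standalone profile of the same name. Reduced sketch (profile name "demo" assumed):

    $ minikube start -p demo --nodes=2       # machines created: demo and demo-m02
    $ minikube start -p demo-m02             # exit 14: collides with a machine name in "demo"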

TestPreload (177.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-581582 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1005 21:47:12.991210 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-581582 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.367563913s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-581582 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-581582 image pull gcr.io/k8s-minikube/busybox: (2.219511716s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-581582
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-581582: (5.842816909s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-581582 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1005 21:47:54.598386 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 21:48:37.312144 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-581582 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m20.189548099s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-581582 image list
helpers_test.go:175: Cleaning up "test-preload-581582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-581582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-581582: (2.473750348s)
--- PASS: TestPreload (177.34s)
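
Note: this test pulls an extra image into a cluster started with --preload=false and verifies it survives a stop/start cycle. The same check by hand (profile name "demo" assumed):

    $ minikube start -p demo --preload=false --container-runtime=crio
    $ minikube -p demo image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p demo && minikube start -p demo
    $ minikube -p demo image list            # busybox should still be listed after the restart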

TestScheduledStopUnix (110.03s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-353456 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-353456 --memory=2048 --driver=docker  --container-runtime=crio: (33.555983376s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-353456 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-353456 -n scheduled-stop-353456
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-353456 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-353456 --cancel-scheduled
E1005 21:50:00.356038 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-353456 -n scheduled-stop-353456
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-353456
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-353456 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-353456
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-353456: exit status 7 (70.584053ms)

-- stdout --
	scheduled-stop-353456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-353456 -n scheduled-stop-353456
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-353456 -n scheduled-stop-353456: exit status 7 (72.292187ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-353456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-353456
E1005 21:50:49.946318 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-353456: (4.758014534s)
--- PASS: TestScheduledStopUnix (110.03s)
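
Note: --schedule arms a delayed stop in a background process, re-running it replaces the pending timer, and --cancel-scheduled aborts it; the "process already finished" messages above are the older timers being superseded. Sketch (profile name "demo" assumed):

    $ minikube stop -p demo --schedule 5m          # arm a stop five minutes out
    $ minikube stop -p demo --cancel-scheduled     # abort the pending stop
    $ minikube stop -p demo --schedule 15s         # re-arm with a short timer
    $ sleep 20; minikube status -p demo            # exit 7: the host was stopped on schedule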

TestInsufficientStorage (13.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-727062 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-727062 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.879962s)

-- stdout --
	{"specversion":"1.0","id":"44644d5c-3551-40ee-bcca-3e23f46de980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-727062] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac293d9b-80a0-47ad-997b-72fb21112754","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17363"}}
	{"specversion":"1.0","id":"5d2fd65a-410c-4cc2-a053-47d9dedbec54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"455e33e8-9c0e-4d3b-a84c-afbc708d67f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig"}}
	{"specversion":"1.0","id":"856f2305-9c05-4e80-a8ac-96ec4ae737d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube"}}
	{"specversion":"1.0","id":"93546eb3-2e41-49d1-9a7a-657c5a94f7a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8cc0cc78-b7e0-4ad9-8b0d-b02ebd6f2ce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ff81d32-1411-4834-bca4-a93cae406e9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"48e4cf87-8d10-4fe4-b22e-506a55637d0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d0b22620-f5e7-4b8a-8274-a23d0194c0fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7f582414-5e8c-40f4-a886-0eac8e5cd5c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c0c20c28-298d-4daa-a2da-a9f6b2364a35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-727062 in cluster insufficient-storage-727062","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"802dffab-9525-47e7-b21e-140d848c92f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a862c04-6808-4cff-9204-d2e497aa905b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3dd015b-f102-48aa-8d79-3748e501c7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-727062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-727062 --output=json --layout=cluster: exit status 7 (306.05119ms)

-- stdout --
	{"Name":"insufficient-storage-727062","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-727062","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1005 21:51:04.642436 1552540 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-727062" does not appear in /home/jenkins/minikube-integration/17363-1448442/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-727062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-727062 --output=json --layout=cluster: exit status 7 (313.703454ms)

-- stdout --
	{"Name":"insufficient-storage-727062","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-727062","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1005 21:51:04.954330 1552592 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-727062" does not appear in /home/jenkins/minikube-integration/17363-1448442/kubeconfig
	E1005 21:51:04.966996 1552592 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/insufficient-storage-727062/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-727062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-727062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-727062: (1.94784809s)
--- PASS: TestInsufficientStorage (13.45s)
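
Note: exit code 26 (RSRC_DOCKER_STORAGE) aborts the start before the node is provisioned, which is why the later status calls find no kubeconfig entry or event log. The test fakes a full disk via the MINIKUBE_TEST_* variables visible in the JSON events above; the same simulation by hand (profile name "demo" assumed):

    $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p demo --output=json
    $ echo $?                                # 26; pass --force to skip the storage check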

TestKubernetesUpgrade (425.52s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.905212788s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-645476
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-645476: (1.356406054s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-645476 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-645476 status --format={{.Host}}: exit status 7 (73.136573ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1005 21:55:49.946546 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 21:55:57.646396 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m49.356716996s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-645476 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (212.375071ms)

-- stdout --
	* [kubernetes-upgrade-645476] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-645476
	    minikube start -p kubernetes-upgrade-645476 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6454762 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-645476 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1005 22:00:49.946277 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-645476 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.554050317s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-645476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-645476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-645476: (3.901278628s)
--- PASS: TestKubernetesUpgrade (425.52s)
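
Note: in-place upgrades are supported (stop, then start with a newer --kubernetes-version), while downgrades are refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recovery options printed above. The upgrade path, reduced to a sketch (profile name "demo" assumed; versions taken from this run):

    $ minikube start -p demo --kubernetes-version=v1.16.0
    $ minikube stop -p demo
    $ minikube start -p demo --kubernetes-version=v1.28.2    # upgrade applied on restart
    $ minikube start -p demo --kubernetes-version=v1.16.0    # exit 106: downgrade refused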

TestPause/serial/Start (57.71s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-235090 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-235090 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.714839627s)
--- PASS: TestPause/serial/Start (57.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (93.857344ms)

-- stdout --
	* [NoKubernetes-834646] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
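
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and a kubernetes-version left in the global config trips the same usage error; the fix is the unset command from the message above (profile name "demo" assumed):

    $ minikube config unset kubernetes-version
    $ minikube start -p demo --no-kubernetes      # provisions the node without Kubernetes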

TestNoKubernetes/serial/StartWithK8s (42.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-834646 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-834646 --driver=docker  --container-runtime=crio: (42.105576787s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-834646 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.52s)

TestNoKubernetes/serial/StartWithStopK8s (6.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --driver=docker  --container-runtime=crio: (4.420837016s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-834646 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-834646 status -o json: exit status 2 (361.709366ms)

-- stdout --
	{"Name":"NoKubernetes-834646","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-834646
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-834646: (2.055132621s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.84s)
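
Note: re-running start with --no-kubernetes on a profile that previously ran Kubernetes keeps the host but leaves the kubelet and apiserver down, which `status` reports with exit code 2 (host up, components stopped), as in the JSON above. Sketch (profile name "demo" assumed):

    $ minikube start -p demo --no-kubernetes
    $ minikube -p demo status -o json        # Host "Running", Kubelet/APIServer "Stopped"; exit 2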

TestNoKubernetes/serial/Start (9.79s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-834646 --no-kubernetes --driver=docker  --container-runtime=crio: (9.791722111s)
--- PASS: TestNoKubernetes/serial/Start (9.79s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-834646 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-834646 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.579545ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
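
Note: with Kubernetes disabled, the kubelet unit is never started, so probing it over SSH exits non-zero; that non-zero exit is the success condition the test asserts. Sketch (profile name "demo" assumed; the systemctl phrasing is reused verbatim from the test):

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
    $ echo $?                                # non-zero: kubelet inactive, as intended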

TestNoKubernetes/serial/ProfileList (1.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-834646
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-834646: (1.244814985s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.72s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-834646 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-834646 --driver=docker  --container-runtime=crio: (7.716610245s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-834646 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-834646 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.136294ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestNetworkPlugins/group/false (4.08s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-798214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-798214 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (278.645596ms)

-- stdout --
	* [false-798214] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1005 21:52:22.674430 1562689 out.go:296] Setting OutFile to fd 1 ...
	I1005 21:52:22.674664 1562689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:52:22.674674 1562689 out.go:309] Setting ErrFile to fd 2...
	I1005 21:52:22.674680 1562689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1005 21:52:22.674940 1562689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17363-1448442/.minikube/bin
	I1005 21:52:22.675403 1562689 out.go:303] Setting JSON to false
	I1005 21:52:22.676422 1562689 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":27290,"bootTime":1696515453,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1005 21:52:22.676496 1562689 start.go:138] virtualization:  
	I1005 21:52:22.679103 1562689 out.go:177] * [false-798214] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1005 21:52:22.680943 1562689 out.go:177]   - MINIKUBE_LOCATION=17363
	I1005 21:52:22.682780 1562689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1005 21:52:22.681107 1562689 notify.go:220] Checking for updates...
	I1005 21:52:22.687192 1562689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17363-1448442/kubeconfig
	I1005 21:52:22.688942 1562689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17363-1448442/.minikube
	I1005 21:52:22.690480 1562689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1005 21:52:22.692359 1562689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1005 21:52:22.695638 1562689 config.go:182] Loaded profile config "pause-235090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1005 21:52:22.695732 1562689 driver.go:378] Setting default libvirt URI to qemu:///system
	I1005 21:52:22.735477 1562689 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1005 21:52:22.735566 1562689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1005 21:52:22.888207 1562689 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-05 21:52:22.873305199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1005 21:52:22.888326 1562689 docker.go:294] overlay module found
	I1005 21:52:22.891575 1562689 out.go:177] * Using the docker driver based on user configuration
	I1005 21:52:22.893215 1562689 start.go:298] selected driver: docker
	I1005 21:52:22.893239 1562689 start.go:902] validating driver "docker" against <nil>
	I1005 21:52:22.893253 1562689 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1005 21:52:22.895752 1562689 out.go:177] 
	W1005 21:52:22.897620 1562689 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1005 21:52:22.899704 1562689 out.go:177] 

** /stderr **
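
Note: this group only asserts the usage error: the crio runtime requires a CNI, so --cni=false is rejected up front with exit 14 and no cluster is ever created, which is why every debug probe below reports a missing profile or context. The rejected invocation, reduced to its essentials (profile name "demo" assumed):

    $ minikube start -p demo --cni=false --container-runtime=crio
    $ echo $?                                # 14: MK_USAGE, "crio" requires CNI
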
net_test.go:88: 
----------------------- debugLogs start: false-798214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-798214

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-798214

>>> host: /etc/nsswitch.conf:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /etc/hosts:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /etc/resolv.conf:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-798214

>>> host: crictl pods:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: crictl containers:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> k8s: describe netcat deployment:
error: context "false-798214" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-798214" does not exist

>>> k8s: netcat logs:
error: context "false-798214" does not exist

>>> k8s: describe coredns deployment:
error: context "false-798214" does not exist

>>> k8s: describe coredns pods:
error: context "false-798214" does not exist

>>> k8s: coredns logs:
error: context "false-798214" does not exist

>>> k8s: describe api server pod(s):
error: context "false-798214" does not exist

>>> k8s: api server logs:
error: context "false-798214" does not exist

>>> host: /etc/cni:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: ip a s:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: ip r s:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: iptables-save:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: iptables table nat:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> k8s: describe kube-proxy daemon set:
error: context "false-798214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-798214" does not exist

>>> k8s: kube-proxy logs:
error: context "false-798214" does not exist

>>> host: kubelet daemon status:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: kubelet daemon config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> k8s: kubelet logs:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-235090
contexts:
- context:
    cluster: pause-235090
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-235090
  name: pause-235090
current-context: pause-235090
kind: Config
preferences: {}
users:
- name: pause-235090
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt
    client-key: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-798214

>>> host: docker daemon status:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: docker daemon config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /etc/docker/daemon.json:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: docker system info:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: cri-docker daemon status:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: cri-docker daemon config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: cri-dockerd version:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: containerd daemon status:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: containerd daemon config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798214"

                                                
                                                
----------------------- debugLogs end: false-798214 [took: 3.638832408s] --------------------------------
helpers_test.go:175: Cleaning up "false-798214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-798214
--- PASS: TestNetworkPlugins/group/false (4.08s)

TestStoppedBinaryUpgrade/Setup (2.14s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.14s)

TestNetworkPlugins/group/auto/Start (86.76s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.759801821s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.76s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-760371
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestNetworkPlugins/group/kindnet/Start (56.86s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1005 22:02:54.598539 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (56.855154251s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (12.5s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jpl86" [f1de76dc-be60-46a1-930e-a694fc60ae53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jpl86" [f1de76dc-be60-46a1-930e-a694fc60ae53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.012543856s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.50s)

TestNetworkPlugins/group/auto/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

TestNetworkPlugins/group/auto/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9bxjx" [46ddfeeb-f1cc-4b6e-aaa8-f9a9c18c0e39] Running
E1005 22:03:37.312044 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.046189871s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fldl5" [e18adf79-0eef-47d8-b05c-aa0b2b8b7fd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fldl5" [e18adf79-0eef-47d8-b05c-aa0b2b8b7fd9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.012297612s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

TestNetworkPlugins/group/calico/Start (74.29s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m14.29397176s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.29s)

TestNetworkPlugins/group/kindnet/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (75.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m15.256381576s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.26s)

TestNetworkPlugins/group/calico/ControllerPod (5.07s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4mc9z" [c76ecf5a-ba89-4c15-9576-3dc518459b55] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.068508879s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (12.76s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q2lg7" [a0a149a3-1efe-4edf-a6d1-a570f193087e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q2lg7" [a0a149a3-1efe-4edf-a6d1-a570f193087e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.021495562s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.76s)

TestNetworkPlugins/group/calico/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.43s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.43s)

TestNetworkPlugins/group/calico/HairPin (0.3s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.30s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.66s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2mmb9" [9ad7d436-3903-4ff0-80b1-7240fce2fa36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2mmb9" [9ad7d436-3903-4ff0-80b1-7240fce2fa36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.015040011s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.66s)

TestNetworkPlugins/group/enable-default-cni/Start (94.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m34.241746538s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (66.89s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1005 22:06:40.357142 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.890747904s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.89s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vk76n" [c74ec783-0ce6-4361-918c-7bdf1d741440] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vk76n" [c74ec783-0ce6-4361-918c-7bdf1d741440] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010386081s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kr2xq" [097e2a0e-6a38-4169-bd94-58d94e263a71] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.033996433s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j4dxs" [77373b25-c374-4d8c-ae4d-58339d4729a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-j4dxs" [77373b25-c374-4d8c-ae4d-58339d4729a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.016789698s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.31s)

TestNetworkPlugins/group/flannel/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

TestNetworkPlugins/group/flannel/HairPin (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

TestNetworkPlugins/group/bridge/Start (88.4s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1005 22:07:54.599897 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-798214 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.395425156s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (136.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-679346 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1005 22:08:06.520055 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:07.801069 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:10.362146 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:15.482805 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:25.723729 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:33.019337 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.024597 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.034849 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.055050 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.095326 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.176020 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.336393 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:33.656721 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:34.297087 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:35.577452 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:37.311641 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 22:08:38.138512 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:43.258715 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:08:46.204294 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:08:53.499660 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:09:13.979882 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-679346 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m16.172422126s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-798214 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-798214 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xff6p" [e78f8942-5813-46d9-812b-f7dce5021098] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xff6p" [e78f8942-5813-46d9-812b-f7dce5021098] Running
E1005 22:09:27.164658 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.011413662s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-798214 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-798214 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E1005 22:27:54.598067 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 22:28:05.237002 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:28:33.019422 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:28:35.962929 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:28:37.312471 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 22:28:41.677123 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
E1005 22:28:44.551636 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:29:16.801448 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:29:17.647009 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (67.48s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-922879 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:09:54.359688 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.365268 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.375360 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.395977 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.436249 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.516491 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.677698 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:54.940325 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:09:54.998614 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:55.638966 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:56.919361 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:09:59.480175 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:10:04.601220 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:10:14.841880 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-922879 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m7.477567845s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.48s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-679346 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [543b6a61-0f79-4d9c-a077-6ffbc4cda5c4] Pending
helpers_test.go:344: "busybox" [543b6a61-0f79-4d9c-a077-6ffbc4cda5c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [543b6a61-0f79-4d9c-a077-6ffbc4cda5c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.033982279s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-679346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-679346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-679346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.445719666s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-679346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

TestStartStop/group/old-k8s-version/serial/Stop (12.47s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-679346 --alsologtostderr -v=3
E1005 22:10:34.869662 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:34.874977 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:34.885766 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:34.906198 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:34.946302 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:35.026903 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:35.187115 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:35.323051 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:10:35.507890 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:36.148949 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:37.429235 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:39.990340 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:10:45.111469 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-679346 --alsologtostderr -v=3: (12.471794611s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-679346 -n old-k8s-version-679346
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-679346 -n old-k8s-version-679346: exit status 7 (79.691543ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-679346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (449.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-679346 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1005 22:10:49.085461 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:10:49.946595 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 22:10:55.352150 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-679346 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m28.86439329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-679346 -n old-k8s-version-679346
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (449.28s)

TestStartStop/group/no-preload/serial/DeployApp (8.54s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-922879 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [957a1d1a-751a-4244-af30-fc6ef0417f98] Pending
helpers_test.go:344: "busybox" [957a1d1a-751a-4244-af30-fc6ef0417f98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [957a1d1a-751a-4244-af30-fc6ef0417f98] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.042953434s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-922879 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.54s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-922879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-922879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10301905s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-922879 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-922879 --alsologtostderr -v=3
E1005 22:11:15.832988 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:11:16.283669 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:11:16.861484 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-922879 --alsologtostderr -v=3: (12.299883402s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-922879 -n no-preload-922879
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-922879 -n no-preload-922879: exit status 7 (157.821792ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-922879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
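
Exit status 7 is the expected result here: with the node stopped, the status command prints "Stopped" and exits nonzero, which the test explicitly tolerates ("may be ok") before verifying that addons can still be enabled against the stopped profile. Reproducing the check by hand:

out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-922879 -n no-preload-922879
echo $?   # 7 while the host is stopped; the addon enable below still succeeds
out/minikube-linux-arm64 addons enable dashboard -p no-preload-922879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4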

TestStartStop/group/no-preload/serial/SecondStart (636.59s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-922879 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:11:56.793556 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:12:12.919746 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:12.925047 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:12.935402 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:12.955665 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:12.995947 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:13.076332 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:13.237386 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:13.557925 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:14.198539 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:15.478743 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:18.038966 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:21.507700 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.513063 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.523363 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.543857 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.584205 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.664554 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:21.824967 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:22.145556 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:22.786105 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:23.159781 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:24.066386 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:26.626634 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:31.747584 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:33.400860 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:37.646622 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 22:12:38.204470 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:12:41.988488 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:12:53.881085 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:12:54.598288 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 22:13:02.468815 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:13:05.237616 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:13:18.714339 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:13:32.926480 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:13:33.019885 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:13:34.841423 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:13:37.311655 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 22:13:43.429904 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:14:00.702338 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:14:16.802044 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:16.807361 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:16.817686 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:16.837953 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:16.878218 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:16.958569 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:17.119171 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:17.439746 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:18.080916 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:19.361813 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:21.922533 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:27.042846 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:37.283914 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:14:54.359976 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:14:56.762268 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:14:57.764576 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:15:05.351097 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:15:22.044740 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:15:34.870411 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:15:38.724751 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:15:49.946607 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 22:16:02.554597 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:17:00.645006 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:17:12.920225 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:17:21.507286 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:17:40.602683 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
E1005 22:17:49.191333 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:17:54.598309 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 22:18:05.236849 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-922879 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (10m36.189259315s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-922879 -n no-preload-922879
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (636.59s)
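
At 10m36s this is the slowest SecondStart in the group, which is consistent with --preload=false: every image for v1.28.2 has to be pulled individually instead of being restored from minikube's preloaded tarball. One way to confirm no preload was involved -- the cache path is an assumption about the default MINIKUBE_HOME layout:

ls -lh ~/.minikube/cache/preloaded-tarball/ 2>/dev/null || echo "no preload tarballs cached"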

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9888k" [db060fe0-ca56-4b7b-b4ca-98e93d2b7fa2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023596386s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9888k" [db060fe0-ca56-4b7b-b4ca-98e93d2b7fa2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009325619s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-679346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-679346 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)
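
VerifyKubernetesImages lists the images present in the CRI-O store and reports anything outside the expected minikube set (here the two kindnet tags and the busybox test image) as non-minikube. The same listing can be pulled by hand; the jq filter is a sketch over crictl's JSON output:

out/minikube-linux-arm64 ssh -p old-k8s-version-679346 "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]'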

TestStartStop/group/old-k8s-version/serial/Pause (3.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-679346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-679346 --alsologtostderr -v=1: (1.051803493s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-679346 -n old-k8s-version-679346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-679346 -n old-k8s-version-679346: exit status 2 (379.268797ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-679346 -n old-k8s-version-679346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-679346 -n old-k8s-version-679346: exit status 2 (361.392808ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-679346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-679346 -n old-k8s-version-679346
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-679346 -n old-k8s-version-679346
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.76s)
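
Pause freezes the control plane and kubelet without tearing anything down, so status reports the API server as "Paused" and the kubelet as "Stopped", each with exit status 2, which the test accepts. A condensed sketch of the same pause/verify/unpause cycle:

out/minikube-linux-arm64 pause -p old-k8s-version-679346
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-679346   # "Paused", exit 2
out/minikube-linux-arm64 unpause -p old-k8s-version-679346
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-679346   # "Running", exit 0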

TestStartStop/group/embed-certs/serial/FirstStart (49.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-509888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:18:33.019943 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:18:37.312020 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
E1005 22:19:16.802099 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-509888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (49.374996144s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.38s)

TestStartStop/group/embed-certs/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-509888 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6bc085d3-9864-451d-aaee-31bf38425ef0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6bc085d3-9864-451d-aaee-31bf38425ef0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.026777882s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-509888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-509888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-509888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.174547198s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-509888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-509888 --alsologtostderr -v=3
E1005 22:19:44.486039 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-509888 --alsologtostderr -v=3: (12.136760717s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-509888 -n embed-certs-509888
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-509888 -n embed-certs-509888: exit status 7 (74.263799ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-509888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (343.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-509888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:19:54.359572 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:20:22.639894 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.645145 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.655444 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.675769 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.716019 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.796284 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:22.956743 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:23.277441 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:23.918230 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:25.198488 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:27.759387 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:32.879990 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:32.992394 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 22:20:34.870560 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:20:43.120627 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:20:49.946298 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 22:21:03.600863 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:21:44.561093 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-509888 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m43.464710664s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-509888 -n embed-certs-509888
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.88s)
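
The --embed-certs flag writes the client certificate and key into the kubeconfig as base64 data instead of referencing files under the profile directory -- the file-reference style is what the cert_rotation errors elsewhere in this run trip over once those files are deleted. A rough way to tell the two styles apart in a kubeconfig:

grep -c 'client-certificate-data' ~/.kube/config   # embedded entries (expected for this profile)
grep -c 'client-certificate:' ~/.kube/config       # file-path entries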

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nslpn" [08345e3d-69f2-4d8c-820c-93a8390646f3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031564047s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nslpn" [08345e3d-69f2-4d8c-820c-93a8390646f3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011381011s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-922879 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-922879 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.57s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-922879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-922879 -n no-preload-922879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-922879 -n no-preload-922879: exit status 2 (362.331062ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-922879 -n no-preload-922879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-922879 -n no-preload-922879: exit status 2 (380.296416ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-922879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-922879 -n no-preload-922879
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-922879 -n no-preload-922879
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.57s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-810849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:22:21.506859 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/flannel-798214/client.crt: no such file or directory
E1005 22:22:54.599036 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/addons-792068/client.crt: no such file or directory
E1005 22:23:05.237162 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:23:06.481277 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
E1005 22:23:20.357554 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-810849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m17.954355334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-810849 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ef43113d-3850-40b9-810b-2356f76aa48f] Pending
helpers_test.go:344: "busybox" [ef43113d-3850-40b9-810b-2356f76aa48f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1005 22:23:33.019049 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ef43113d-3850-40b9-810b-2356f76aa48f] Running
E1005 22:23:37.311947 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/ingress-addon-legacy-570164/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.040660184s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-810849 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-810849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-810849 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.091202452s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-810849 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-810849 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-810849 --alsologtostderr -v=3: (12.128465904s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849: exit status 7 (82.826097ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-810849 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-810849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:24:16.801479 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/bridge-798214/client.crt: no such file or directory
E1005 22:24:28.286696 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/auto-798214/client.crt: no such file or directory
E1005 22:24:54.359382 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:24:56.063376 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/kindnet-798214/client.crt: no such file or directory
E1005 22:25:22.639699 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-810849 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m46.627673373s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.23s)
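
Both starts for this profile pass --apiserver-port=8444, so the API endpoint recorded in the kubeconfig should use that port rather than the default 8443. A quick sanity check against the profile's cluster entry:

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-810849")].cluster.server}'
# expected to end in :8444 for this profile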

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-27vwc" [8fb1fbdc-fbbf-4a4f-aec4-74c74947ff51] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1005 22:25:34.870568 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-27vwc" [8fb1fbdc-fbbf-4a4f-aec4-74c74947ff51] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.02884442s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-27vwc" [8fb1fbdc-fbbf-4a4f-aec4-74c74947ff51] Running
E1005 22:25:49.946243 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/functional-322912/client.crt: no such file or directory
E1005 22:25:50.321461 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/old-k8s-version-679346/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010507336s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-509888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-509888 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (3.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-509888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-509888 -n embed-certs-509888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-509888 -n embed-certs-509888: exit status 2 (363.457176ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-509888 -n embed-certs-509888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-509888 -n embed-certs-509888: exit status 2 (365.62569ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-509888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-509888 -n embed-certs-509888
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-509888 -n embed-certs-509888
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.57s)

TestStartStop/group/newest-cni/serial/FirstStart (43.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-074977 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:26:00.394009 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
E1005 22:26:02.954371 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
E1005 22:26:08.075510 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
E1005 22:26:17.405252 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
E1005 22:26:18.316072 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
E1005 22:26:38.796397 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-074977 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (43.992773572s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.99s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
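
DeployApp is a deliberate no-op for this group: the profile starts with --network-plugin=cni but no CNI plugin is installed, so user pods could not schedule (the EnableAddonWhileActive step below logs the same warning). The underlying symptom would show up as node readiness; a sketch:

kubectl --context newest-cni-074977 get nodes
# A node typically stays NotReady until a CNI plugin is installed, so a
# deployed pod would sit in Pending -- hence the test skips the deploy.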

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-074977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-074977 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092661252s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-074977 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-074977 --alsologtostderr -v=3: (1.321519008s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-074977 -n newest-cni-074977
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-074977 -n newest-cni-074977: exit status 7 (80.031062ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-074977 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
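
The "exit status 7 (may be ok)" note reflects that minikube status encodes cluster state in its exit code, so a stopped profile is an expected non-zero. A minimal scripting sketch around that behavior ("demo-cni" is a hypothetical profile; only exit code 7, observed above for a stopped host, is assumed):

    # Query host state; branch on the exit code instead of failing outright.
    minikube status --format='{{.Host}}' -p demo-cni
    rc=$?
    case "$rc" in
      0) echo "host running" ;;
      7) echo "host stopped, as expected right after 'minikube stop'" ;;
      *) echo "unexpected status exit code: $rc" ;;
    esac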

TestStartStop/group/newest-cni/serial/SecondStart (31.56s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-074977 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1005 22:26:57.914846 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/custom-flannel-798214/client.crt: no such file or directory
E1005 22:27:12.920141 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/enable-default-cni-798214/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-074977 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (31.149093041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-074977 -n newest-cni-074977
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.56s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-074977 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)
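
The image check above shells into the node and dumps the CRI image list as JSON. A hedged equivalent from the host, assuming jq is available (the test itself parses the JSON in Go, and "demo-cni" is a hypothetical profile):

    # List the repo tags CRI-O knows about inside the node.
    minikube ssh -p demo-cni "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'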

TestStartStop/group/newest-cni/serial/Pause (3.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-074977 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-074977 -n newest-cni-074977
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-074977 -n newest-cni-074977: exit status 2 (380.533128ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-074977 -n newest-cni-074977
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-074977 -n newest-cni-074977: exit status 2 (365.60281ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-074977 --alsologtostderr -v=1
E1005 22:27:19.756611 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/no-preload-922879/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-074977 -n newest-cni-074977
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-074977 -n newest-cni-074977
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.29s)
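
Read as a sequence, the pause test above is: pause, confirm APIServer=Paused and Kubelet=Stopped (status exits 2 while paused, which the test tolerates), then unpause and re-check. A minimal sketch of the same round trip ("demo-cni" is a hypothetical profile; the exit codes are the ones observed above):

    minikube pause -p demo-cni
    minikube status --format='{{.APIServer}}' -p demo-cni || true  # prints Paused, exits 2
    minikube status --format='{{.Kubelet}}' -p demo-cni || true    # prints Stopped, exits 2
    minikube unpause -p demo-cni
    minikube status --format='{{.APIServer}}' -p demo-cni          # exits 0 once the apiserver is back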

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k9hfj" [3d36401b-9437-47f3-a618-017d4853c160] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k9hfj" [3d36401b-9437-47f3-a618-017d4853c160] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.035631159s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)
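
The helper's label wait above can be approximated with kubectl directly. A sketch using the same label, namespace, and context as the run above; the test uses its own Go poller rather than kubectl wait, so this is a stand-in:

    # Block until the dashboard pod reports Ready, mirroring the 9m budget.
    kubectl --context default-k8s-diff-port-810849 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m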

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k9hfj" [3d36401b-9437-47f3-a618-017d4853c160] Running
E1005 22:29:54.360267 1453786 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/calico-798214/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010604484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-810849 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-810849 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-810849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849: exit status 2 (334.553809ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849: exit status 2 (346.508327ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-810849 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-810849 -n default-k8s-diff-port-810849
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)

Test skip (29/301)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.63s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-717480 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-717480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-717480
--- SKIP: TestDownloadOnlyKic (0.63s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:442: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:496: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.86s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-798214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-798214

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-798214

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/hosts:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/resolv.conf:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-798214

>>> host: crictl pods:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: crictl containers:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> k8s: describe netcat deployment:
error: context "kubenet-798214" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-798214" does not exist

>>> k8s: netcat logs:
error: context "kubenet-798214" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-798214" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-798214" does not exist

>>> k8s: coredns logs:
error: context "kubenet-798214" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-798214" does not exist

>>> k8s: api server logs:
error: context "kubenet-798214" does not exist

>>> host: /etc/cni:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: ip a s:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: ip r s:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: iptables-save:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: iptables table nat:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-798214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-798214" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-798214" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: kubelet daemon config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> k8s: kubelet logs:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-235090
contexts:
- context:
    cluster: pause-235090
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-235090
  name: pause-235090
current-context: pause-235090
kind: Config
preferences: {}
users:
- name: pause-235090
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt
    client-key: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-798214

>>> host: docker daemon status:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: docker daemon config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: docker system info:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: cri-docker daemon status:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: cri-docker daemon config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: cri-dockerd version:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: containerd daemon status:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: containerd daemon config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: containerd config dump:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: crio daemon status:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: crio daemon config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: /etc/crio:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

>>> host: crio config:
* Profile "kubenet-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798214"

----------------------- debugLogs end: kubenet-798214 [took: 3.70329925s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-798214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-798214
--- SKIP: TestNetworkPlugins/group/kubenet (3.86s)

TestNetworkPlugins/group/cilium (5.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-798214 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-798214

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-798214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-798214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-798214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-798214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-798214" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-798214" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-798214" does not exist

>>> k8s: api server logs:
error: context "cilium-798214" does not exist

>>> host: /etc/cni:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: ip a s:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: ip r s:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: iptables-save:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: iptables table nat:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-798214

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-798214

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-798214" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-798214" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-798214

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-798214

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-798214" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-798214" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-798214" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-798214" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-798214" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: kubelet daemon config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> k8s: kubelet logs:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17363-1448442/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-235090
contexts:
- context:
    cluster: pause-235090
    extensions:
    - extension:
        last-update: Thu, 05 Oct 2023 21:51:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-235090
  name: pause-235090
current-context: pause-235090
kind: Config
preferences: {}
users:
- name: pause-235090
  user:
    client-certificate: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.crt
    client-key: /home/jenkins/minikube-integration/17363-1448442/.minikube/profiles/pause-235090/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-798214

>>> host: docker daemon status:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: docker daemon config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: docker system info:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: cri-docker daemon status:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: cri-docker daemon config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: cri-dockerd version:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: containerd daemon status:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: containerd daemon config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: containerd config dump:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: crio daemon status:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: crio daemon config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: /etc/crio:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

>>> host: crio config:
* Profile "cilium-798214" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798214"

----------------------- debugLogs end: cilium-798214 [took: 5.30056755s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-798214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-798214
--- SKIP: TestNetworkPlugins/group/cilium (5.53s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-833266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-833266
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    